Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 4h27m
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
Unexpected error:
    <*errors.withStack | 0xc0012cf698>: {
        error: <*errors.withMessage | 0xc0013ac440>{
            cause: <*errors.errorString | 0xc000f058d0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x2eef258, 0x32e7387, 0x1876cf7, 0x32e7153, 0x14384c5, 0x14379bc, 0x187855c, 0x1879571, 0x1878f65, 0x18785fb, 0x187e889, 0x187e272, 0x188ab71, 0x188a896, 0x1889ee5, 0x188c5a5, 0x1899e09, 0x1899c1e, 0x32ec2f8, 0x148dc2b, 0x13c57e1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:232
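The --ginkgo.focus argument in the command above is shell-escaped; a minimal sketch of the same invocation with the regex spelled out, assuming the repository checkout and Azure credentials that the CI job normally provides:

# Re-run only the capz conformance spec; the focus regex matches the spec name
# "capz-e2e Conformance Tests conformance-tests".
go run hack/e2e.go -v --test \
  --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'

The failure itself only records that the conformance container exited non-zero ("error container run failed with exit code 1"); the actionable detail is in the captured Ginkgo output that follows.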
[BeforeEach] Conformance Tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-5alf7c
STEP: Creating namespace "capz-conf-5alf7c" for hosting the cluster
Nov 14 01:02:09.543: INFO: starting to create namespace for hosting the "capz-conf-5alf7c" test spec
INFO: Creating namespace capz-conf-5alf7c
INFO: Creating event watcher for namespace "capz-conf-5alf7c"
[Measure] conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-5alf7c" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.26.0-beta.0.65+8e48df13531802, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-5alf7c --infrastructure (default) --kubernetes-version v1.26.0-beta.0.65+8e48df13531802 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-5alf7c/capz-conf-5alf7c-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-5alf7c/capz-conf-5alf7c-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-5alf7c-md-0 are in the "<None>" failure domain
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-5alf7c-md-win are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
INFO: Using repo-list '' for version 'v1.26.0-beta.0.65+8e48df13531802'
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--dump-logs-on-failure=false" "--report-prefix=kubetest."
"--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "-node-os-distro=windows" "-disable-log-dump=true" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.timeout=4h" "-ginkgo.v=true" "-ginkgo.slow-spec-threshold=120s" "-ginkgo.trace=true" "-prepull-images=true" "-dump-logs-on-failure=true" "-ginkgo.flakeAttempts=0" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.progress=true"] I1114 01:10:11.242997 13 e2e.go:126] Starting e2e run "f888dcd1-3d1c-4e00-bec6-4a96a19df9f1" on Ginkgo node 1 Nov 14 01:10:11.257: INFO: Enabling in-tree volume drivers Running Suite: Kubernetes e2e suite - /usr/local/bin ==================================================== Random Seed: �[1m1668388211�[0m - will randomize all specs Will run �[1m82�[0m of �[1m7066�[0m specs �[38;5;243m------------------------------�[0m �[1m[SynchronizedBeforeSuite] �[0m �[38;5;243mtest/e2e/e2e.go:77�[0m [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 Nov 14 01:10:11.536: INFO: >>> kubeConfig: /tmp/kubeconfig Nov 14 01:10:11.538: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Nov 14 01:10:11.735: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 14 01:10:11.863: INFO: The status of Pod calico-node-windows-xk6bd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Nov 14 01:10:11.863: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 14 01:10:11.863: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Nov 14 01:10:11.863: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 01:10:11.863: INFO: calico-node-windows-xk6bd capz-conf-bpf2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-14 01:09:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-14 01:10:06 +0000 UTC ContainersNotReady containers with unready status: [calico-node-felix]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-14 01:10:06 +0000 UTC ContainersNotReady containers with unready status: [calico-node-felix]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-14 01:08:57 +0000 UTC }] Nov 14 01:10:11.863: INFO: Nov 14 01:10:14.006: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Nov 14 01:10:14.006: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Nov 14 01:10:14.006: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Nov 14 01:10:14.068: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) Nov 14 01:10:14.068: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node-windows' (0 seconds elapsed) Nov 14 01:10:14.068: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed) Nov 14 01:10:14.068: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed) Nov 14 01:10:14.068: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Nov 14 01:10:14.068: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed) Nov 14 01:10:14.068: INFO: Pre-pulling images so that they are cached for the tests. Nov 14 01:10:14.383: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40 Nov 14 01:10:14.441: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:10:14.501: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Nov 14 01:10:14.502: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:10:23.546: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:10:23.596: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Nov 14 01:10:23.596: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:10:32.548: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:10:32.595: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Nov 14 01:10:32.595: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:10:41.542: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:10:41.590: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 2 Nov 14 01:10:41.590: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40 Nov 14 01:10:41.590: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2 Nov 14 01:10:41.629: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:10:41.678: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2: 2 Nov 14 01:10:41.678: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2 Nov 14 01:10:41.678: INFO: Waiting for 
img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2
Nov 14 01:10:41.718: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 14 01:10:41.766: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 2
Nov 14 01:10:41.766: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2
Nov 14 01:10:41.766: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Nov 14 01:10:41.809: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 14 01:10:41.857: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2: 2
Nov 14 01:10:41.857: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Nov 14 01:10:41.892: INFO: e2e test version: v1.26.0-beta.0.65+8e48df13531802
Nov 14 01:10:41.922: INFO: kube-apiserver version: v1.26.0-beta.0.65+8e48df13531802
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77
Nov 14 01:10:41.922: INFO: >>> kubeConfig: /tmp/kubeconfig
Nov 14 01:10:41.955: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [30.419 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
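The suite setup above waits for every kube-system pod and daemonset in the workload cluster and then pre-pulls the e2e test images through per-image daemonsets before any spec runs. A hedged sketch of checking the same readiness by hand, using the kubeconfig path and node names that appear in the log (the kubectl commands are illustrative and not part of the test run):

# Point kubectl at the workload cluster the suite is validating.
export KUBECONFIG=/tmp/kubeconfig
# Control plane plus the two Windows workers (capz-conf-bpf2r, capz-conf-sq8nr).
kubectl get nodes -o wide
# calico-node-windows, containerd-logger, csi-proxy, kube-proxy-windows, ... as in the log above.
kubectl get pods,daemonsets -n kube-system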
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:466
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:10:41.999
Nov 14 01:10:41.999: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 11/14/22 01:10:42
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:10:42.097
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:10:42.158
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Nov 14 01:10:42.230: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 14 01:10:42.305: INFO: Waiting for terminating namespaces to be deleted...
Nov 14 01:10:42.336: INFO: Logging pods the apiserver thinks is on node capz-conf-bpf2r before test
Nov 14 01:10:42.381: INFO: img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40-zb8gt from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container app ready: true, restart count 0
Nov 14 01:10:42.381: INFO: img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2-kx6z2 from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container app ready: false, restart count 1
Nov 14 01:10:42.381: INFO: img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2-75whz from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container app ready: true, restart count 0
Nov 14 01:10:42.381: INFO: img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2-xnghm from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container app ready: true, restart count 0
Nov 14 01:10:42.381: INFO: calico-node-windows-xk6bd from kube-system started at 2022-11-14 01:08:57 +0000 UTC (2 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container calico-node-felix ready: true, restart count 1
Nov 14 01:10:42.381: INFO: Container calico-node-startup ready: true, restart count 0
Nov 14 01:10:42.381: INFO: containerd-logger-bpt69 from kube-system started at 2022-11-14 01:08:57 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container containerd-logger ready: true, restart count 0
Nov 14 01:10:42.381: INFO: csi-proxy-76x9p from kube-system started at 2022-11-14 01:09:18 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container csi-proxy ready: true, restart count 0
Nov 14 01:10:42.381: INFO: kube-proxy-windows-nz2rt from kube-system started at 2022-11-14 01:08:57 +0000 UTC (1 container statuses recorded)
Nov 14 01:10:42.381: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 01:10:42.381: INFO: Logging pods the
apiserver thinks is on node capz-conf-sq8nr before test Nov 14 01:10:42.424: INFO: img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40-2q4tt from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container app ready: true, restart count 0 Nov 14 01:10:42.425: INFO: img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2-w87jp from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container app ready: true, restart count 1 Nov 14 01:10:42.425: INFO: img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2-5hqjt from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container app ready: true, restart count 0 Nov 14 01:10:42.425: INFO: img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2-mw9hp from img-puller-3713 started at 2022-11-14 01:10:14 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container app ready: true, restart count 0 Nov 14 01:10:42.425: INFO: calico-node-windows-w6hn2 from kube-system started at 2022-11-14 01:08:50 +0000 UTC (2 container statuses recorded) Nov 14 01:10:42.425: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 01:10:42.425: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 01:10:42.425: INFO: containerd-logger-bf8mz from kube-system started at 2022-11-14 01:08:50 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 01:10:42.425: INFO: csi-proxy-fbwsw from kube-system started at 2022-11-14 01:09:15 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 01:10:42.425: INFO: kube-proxy-windows-lldgb from kube-system started at 2022-11-14 01:08:50 +0000 UTC (1 container statuses recorded) Nov 14 01:10:42.425: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:466 �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m11/14/22 01:10:42.425�[0m Nov 14 01:10:42.474: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-5017" to be "running" Nov 14 01:10:42.504: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 30.309809ms Nov 14 01:10:44.536: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062439825s Nov 14 01:10:46.540: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066302292s Nov 14 01:10:48.557: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083383525s Nov 14 01:10:50.540: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065753918s Nov 14 01:10:52.537: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063311227s Nov 14 01:10:54.543: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069016854s Nov 14 01:10:56.537: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 14.062649899s Nov 14 01:10:58.538: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064009977s Nov 14 01:11:00.545: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.070790379s
Nov 14 01:11:02.549: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 20.075218282s
Nov 14 01:11:02.549: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 11/14/22 01:11:02.582
STEP: Trying to apply a random label on the found node. 11/14/22 01:11:02.63
STEP: verifying the node has the label kubernetes.io/e2e-1fcf71f0-3662-4729-a862-b778e1473f72 42 11/14/22 01:11:02.684
STEP: Trying to relaunch the pod, now with labels. 11/14/22 01:11:02.719
Nov 14 01:11:02.757: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-5017" to be "not pending"
Nov 14 01:11:02.788: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 30.739412ms
Nov 14 01:11:04.820: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062848761s
Nov 14 01:11:06.821: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063109071s
Nov 14 01:11:08.821: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 6.063447828s
Nov 14 01:11:08.821: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-1fcf71f0-3662-4729-a862-b778e1473f72 off the node capz-conf-bpf2r 11/14/22 01:11:08.853
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1fcf71f0-3662-4729-a862-b778e1473f72 11/14/22 01:11:08.942
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Nov 14 01:11:08.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-5017" for this suite.
11/14/22 01:11:09.011
------------------------------
• [27.055 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:466
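The spec above launches an unlabeled pod to pick a schedulable node, applies a random kubernetes.io/e2e-* label with value 42 to that node, and then verifies that a pod carrying a matching nodeSelector is scheduled onto it. A minimal sketch of the same pattern, with illustrative names (example-node and example.com/e2e-demo are placeholders; the image is one of the pre-pulled e2e test images listed earlier):

# Label a node, then schedule a pod whose nodeSelector must match that label.
kubectl label node example-node example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: app
    image: registry.k8s.io/e2e-test-images/nginx:1.14-2
EOF
# Remove the label again afterwards, as the spec does in its cleanup.
kubectl label node example-node example.com/e2e-demo-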
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource)
Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:52
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:11:09.06
Nov 14 01:11:09.060: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 01:11:09.061
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:11:09.157
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:11:09.217
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:52
Nov 14 01:11:09.277: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 5 replicas 11/14/22 01:11:09.278
STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-4095 11/14/22 01:11:09.329
I1114 01:11:09.367702 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-4095, replica count: 5
I1114 01:11:19.418538 13 runners.go:193] test-deployment Pods: 5 out of 5 created, 0 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1114 01:11:29.418806 13 runners.go:193] test-deployment Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/14/22 01:11:29.418
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-4095 11/14/22 01:11:29.499
I1114 01:11:29.541298 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-4095, replica count: 1
I1114
01:11:39.595801 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 01:11:44.599: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 14 01:11:44.630: INFO: RC test-deployment: consume 325 millicores in total Nov 14 01:11:44.631: INFO: RC test-deployment: setting consumption to 325 millicores in total Nov 14 01:11:44.631: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:11:44.631: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:11:44.631: INFO: RC test-deployment: consume 0 MB in total Nov 14 01:11:44.632: INFO: RC test-deployment: consume custom metric 0 in total Nov 14 01:11:44.632: INFO: RC test-deployment: disabling mem consumption Nov 14 01:11:44.636: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 14 01:11:44.705: INFO: waiting for 3 replicas (current: 5) Nov 14 01:12:04.741: INFO: waiting for 3 replicas (current: 5) Nov 14 01:12:20.710: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:12:20.710: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:12:24.740: INFO: waiting for 3 replicas (current: 5) Nov 14 01:12:44.738: INFO: waiting for 3 replicas (current: 5) Nov 14 01:12:50.772: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:12:50.772: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:13:04.741: INFO: waiting for 3 replicas (current: 5) Nov 14 01:13:20.815: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:13:20.816: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:13:24.739: INFO: waiting for 3 replicas (current: 5) Nov 14 01:13:44.741: INFO: waiting for 3 replicas (current: 5) Nov 14 01:13:50.862: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:13:50.863: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:14:04.738: INFO: waiting for 3 replicas (current: 5) Nov 14 01:14:20.905: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:14:20.905: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:14:24.738: INFO: waiting for 3 replicas (current: 5) Nov 14 01:14:44.739: INFO: waiting for 3 
replicas (current: 5) Nov 14 01:14:50.950: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:14:50.950: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:15:04.740: INFO: waiting for 3 replicas (current: 5) Nov 14 01:15:20.993: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:15:20.994: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:15:24.739: INFO: waiting for 3 replicas (current: 5) Nov 14 01:15:44.739: INFO: waiting for 3 replicas (current: 5) Nov 14 01:15:51.054: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:15:51.055: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:16:04.741: INFO: waiting for 3 replicas (current: 5) Nov 14 01:16:21.096: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:16:21.097: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:16:24.740: INFO: waiting for 3 replicas (current: 5) Nov 14 01:16:44.739: INFO: waiting for 3 replicas (current: 5) Nov 14 01:16:51.141: INFO: RC test-deployment: sending request to consume 325 millicores Nov 14 01:16:51.141: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 14 01:17:04.741: INFO: waiting for 3 replicas (current: 3) Nov 14 01:17:04.741: INFO: RC test-deployment: consume 10 millicores in total Nov 14 01:17:04.741: INFO: RC test-deployment: setting consumption to 10 millicores in total Nov 14 01:17:04.772: INFO: waiting for 1 replicas (current: 3) Nov 14 01:17:21.183: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:17:21.183: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:17:24.808: INFO: waiting for 1 replicas (current: 3) Nov 14 01:17:44.805: INFO: waiting for 1 replicas (current: 3) Nov 14 01:17:51.223: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:17:51.224: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:18:04.805: INFO: waiting for 1 replicas (current: 3) Nov 14 01:18:21.263: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:18:21.263: INFO: 
ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:18:24.807: INFO: waiting for 1 replicas (current: 3) Nov 14 01:18:44.807: INFO: waiting for 1 replicas (current: 3) Nov 14 01:18:51.303: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:18:51.303: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:19:04.808: INFO: waiting for 1 replicas (current: 3) Nov 14 01:19:21.344: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:19:21.344: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:19:24.806: INFO: waiting for 1 replicas (current: 3) Nov 14 01:19:44.806: INFO: waiting for 1 replicas (current: 3) Nov 14 01:19:51.384: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:19:51.384: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:20:04.808: INFO: waiting for 1 replicas (current: 3) Nov 14 01:20:21.424: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:20:21.424: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:20:24.815: INFO: waiting for 1 replicas (current: 3) Nov 14 01:20:44.807: INFO: waiting for 1 replicas (current: 3) Nov 14 01:20:51.465: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:20:51.465: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:21:04.805: INFO: waiting for 1 replicas (current: 3) Nov 14 01:21:21.507: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:21:21.508: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:21:24.804: INFO: waiting for 1 replicas (current: 3) Nov 14 01:21:44.804: INFO: waiting for 1 replicas (current: 3) Nov 14 01:21:51.551: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:21:51.552: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:22:04.806: INFO: waiting for 1 replicas (current: 2) Nov 14 
01:22:21.590: INFO: RC test-deployment: sending request to consume 10 millicores
Nov 14 01:22:21.591: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Nov 14 01:22:24.804: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC test-deployment 11/14/22 01:22:24.84
Nov 14 01:22:24.840: INFO: RC test-deployment: stopping metric consumer
Nov 14 01:22:24.840: INFO: RC test-deployment: stopping CPU consumer
Nov 14 01:22:24.840: INFO: RC test-deployment: stopping mem consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-4095, will wait for the garbage collector to delete the pods 11/14/22 01:22:34.84
Nov 14 01:22:34.961: INFO: Deleting Deployment.apps test-deployment took: 36.678137ms
Nov 14 01:22:35.062: INFO: Terminating Deployment.apps test-deployment pods took: 101.829893ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-4095, will wait for the garbage collector to delete the pods 11/14/22 01:22:37.342
Nov 14 01:22:37.460: INFO: Deleting ReplicationController test-deployment-ctrl took: 35.186598ms
Nov 14 01:22:37.560: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.462611ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32
Nov 14 01:22:39.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4095" for this suite.
11/14/22 01:22:39.373
------------------------------
• [SLOW TEST] [690.350 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23
[Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48
Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:52
RC test-deployment: sending request to consume 10 millicores Nov 14 01:19:21.344: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:19:24.806: INFO: waiting for 1 replicas (current: 3) Nov 14 01:19:44.806: INFO: waiting for 1 replicas (current: 3) Nov 14 01:19:51.384: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:19:51.384: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:20:04.808: INFO: waiting for 1 replicas (current: 3) Nov 14 01:20:21.424: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:20:21.424: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:20:24.815: INFO: waiting for 1 replicas (current: 3) Nov 14 01:20:44.807: INFO: waiting for 1 replicas (current: 3) Nov 14 01:20:51.465: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:20:51.465: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:21:04.805: INFO: waiting for 1 replicas (current: 3) Nov 14 01:21:21.507: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:21:21.508: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:21:24.804: INFO: waiting for 1 replicas (current: 3) Nov 14 01:21:44.804: INFO: waiting for 1 replicas (current: 3) Nov 14 01:21:51.551: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:21:51.552: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:22:04.806: INFO: waiting for 1 replicas (current: 2) Nov 14 01:22:21.590: INFO: RC test-deployment: sending request to consume 10 millicores Nov 14 01:22:21.591: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4095/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 14 01:22:24.804: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC test-deployment 11/14/22 01:22:24.84 Nov 14 01:22:24.840: INFO: RC test-deployment: stopping metric consumer Nov 14 01:22:24.840: INFO: RC test-deployment: stopping CPU consumer Nov 14 01:22:24.840: INFO: RC test-deployment: stopping mem consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-4095, will wait for the garbage collector to delete the pods 11/14/22 01:22:34.84
Nov 14 01:22:34.961: INFO: Deleting Deployment.apps test-deployment took: 36.678137ms Nov 14 01:22:35.062: INFO: Terminating Deployment.apps test-deployment pods took: 101.829893ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-4095, will wait for the garbage collector to delete the pods 11/14/22 01:22:37.342 Nov 14 01:22:37.460: INFO: Deleting ReplicationController test-deployment-ctrl took: 35.186598ms Nov 14 01:22:37.560: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.462611ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 01:22:39.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4095" for this suite. 11/14/22 01:22:39.373
<< End Captured GinkgoWriter Output
------------------------------
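For context, the HPA spec above uses the e2e resource-consumer pattern: the test-deployment pods burn whatever CPU the test requests through the test-deployment-ctrl service's ConsumeCPU proxy endpoint, and a HorizontalPodAutoscaler scaling on average CPU utilization reacts, so 325 millicores of load holds 3 replicas and 10 millicores lets the workload settle at 1. A minimal client-go sketch of the kind of autoscaling/v2 object involved is below; the utilization target and replica bounds are illustrative assumptions, not values read from this run.

// Hypothetical sketch only: create an autoscaling/v2 HPA for the
// "test-deployment" Deployment, scaling on average CPU utilization.
// The target and min/max values are assumptions for illustration.
package main

import (
    "context"

    autoscalingv2 "k8s.io/api/autoscaling/v2"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    minReplicas := int32(1)        // assumed lower bound
    targetUtilization := int32(20) // assumed average CPU utilization target (%)

    hpa := &autoscalingv2.HorizontalPodAutoscaler{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "test-deployment",
            Namespace: "horizontal-pod-autoscaling-4095",
        },
        Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
            ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
                APIVersion: "apps/v1", Kind: "Deployment", Name: "test-deployment",
            },
            MinReplicas: &minReplicas,
            MaxReplicas: 5,
            Metrics: []autoscalingv2.MetricSpec{{
                Type: autoscalingv2.ResourceMetricSourceType,
                Resource: &autoscalingv2.ResourceMetricSource{
                    Name: corev1.ResourceCPU,
                    Target: autoscalingv2.MetricTarget{
                        Type:               autoscalingv2.UtilizationMetricType,
                        AverageUtilization: &targetUtilization,
                    },
                },
            }},
        },
    }

    // Create the autoscaler; the HPA controller then adjusts the Deployment's replicas.
    if _, err := cs.AutoscalingV2().HorizontalPodAutoscalers(hpa.Namespace).
        Create(context.TODO(), hpa, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}

The long stretches of "waiting for N replicas" in the log are expected: scale-down passes through the HPA's default downscale stabilization window (about five minutes), so each step down only lands after sustained low usage.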
------------------------------
[sig-apps] Daemon set [Serial]
should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:374
[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:22:39.418 Nov 14 01:22:39.419: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 11/14/22 01:22:39.42
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:22:39.516
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:22:39.575
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:374 Nov 14 01:22:39.775: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster. 11/14/22 01:22:39.81 Nov 14 01:22:39.881: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:39.914: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:22:39.914: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:22:40.951: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:40.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:22:40.984: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:22:41.951: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:41.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:22:41.984: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:22:42.955: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:42.988: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:22:42.988: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:22:43.951: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:43.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 14 01:22:43.984: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:22:44.954: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:44.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 14 01:22:44.986: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Update daemon pods image. 11/14/22 01:22:45.112
STEP: Check that daemon pods images are updated. 11/14/22 01:22:45.19 Nov 14 01:22:45.223: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2.
Nov 14 01:22:45.259: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:46.293: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:46.329: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:47.295: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:47.332: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:48.293: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:48.332: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:49.294: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:49.331: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:50.296: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:50.296: INFO: Pod daemon-set-vw5qz is not available Nov 14 01:22:50.332: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:51.293: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:51.293: INFO: Pod daemon-set-vw5qz is not available Nov 14 01:22:51.329: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:52.295: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 14 01:22:52.295: INFO: Pod daemon-set-vw5qz is not available Nov 14 01:22:52.332: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:53.293: INFO: Wrong image for pod: daemon-set-hgjk7. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. 
Nov 14 01:22:53.293: INFO: Pod daemon-set-vw5qz is not available Nov 14 01:22:53.329: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:54.330: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:55.328: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:56.332: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:57.330: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:58.293: INFO: Pod daemon-set-s7pc6 is not available Nov 14 01:22:58.330: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster. 11/14/22 01:22:58.33 Nov 14 01:22:58.367: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:58.400: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 14 01:22:58.400: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:22:59.438: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:22:59.471: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 14 01:22:59.471: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:00.439: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:00.471: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 14 01:23:00.471: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:01.438: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:01.470: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 14 01:23:01.470: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111
STEP: Deleting DaemonSet "daemon-set" 11/14/22 01:23:01.628
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6714, will wait for the garbage collector to delete the pods 11/14/22 01:23:01.628
Nov 14 01:23:01.747: INFO: Deleting DaemonSet.extensions daemon-set took: 37.439647ms Nov 14 01:23:01.848: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.031063ms Nov 14 01:23:06.581: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:06.581: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Nov 14 01:23:06.612: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3057"},"items":null} Nov 14 01:23:06.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3057"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 01:23:06.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-6714" for this suite. 11/14/22 01:23:06.783
------------------------------
• [27.402 seconds]
[sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:374
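The spec above drives a DaemonSet RollingUpdate: the pod template's image is changed from registry.k8s.io/e2e-test-images/httpd:2.4.38-2 to agnhost:2.40, and the controller replaces pods until every node reports the new image. A rough client-go sketch of that flow follows; it is an illustration under assumptions (kubeconfig path, polling interval, timeout), not the suite's actual helper code.

// Illustrative sketch (not the e2e suite's helpers): patch a DaemonSet that
// uses the RollingUpdate strategy to a new container image, then poll until
// the rollout is reflected in its status.
package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    ns, name := "daemonsets-6714", "daemon-set" // names taken from the log above
    newImage := "registry.k8s.io/e2e-test-images/agnhost:2.40"

    // Strategic-merge patch that swaps the image of the container named "app"
    // (the container name shown in the pod dump later in this log).
    patch := []byte(fmt.Sprintf(
        `{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"%s"}]}}}}`, newImage))
    if _, err := cs.AppsV1().DaemonSets(ns).Patch(
        context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // Wait until every scheduled daemon pod has been updated and is available.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return ds.Status.UpdatedNumberScheduled == ds.Status.DesiredNumberScheduled &&
            ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
    })
    if err != nil {
        panic(err)
    }
}

With the default RollingUpdate settings (maxUnavailable of 1), pods are replaced one node at a time, which matches the one-by-one "Pod ... is not available" sequence in the captured output.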
------------------------------
[sig-apps] Daemon set [Serial]
should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:823
[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:23:06.826 Nov 14 01:23:06.827: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 11/14/22 01:23:06.829
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:23:06.928
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:23:06.989
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146
[It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:823
STEP: Creating simple DaemonSet "daemon-set" 11/14/22 01:23:07.185
STEP: Check that daemon pods launch on every node of the cluster. 11/14/22 01:23:07.227
Nov 14 01:23:07.270: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:07.304: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:07.304: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:08.341: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:08.374: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:08.374: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:09.342: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:09.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:09.375: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:10.342: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:10.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:10.375: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:11.343: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:11.376: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:11.376: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:12.342: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:12.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 14 01:23:12.375: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: listing all DeamonSets 11/14/22 01:23:12.406
STEP: DeleteCollection of the DaemonSets 11/14/22 01:23:12.437
STEP: Verify that ReplicaSets have been deleted 11/14/22 01:23:12.477
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 Nov 14 01:23:12.570: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3135"},"items":null} Nov 14 01:23:12.604: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3135"},"items":[{"metadata":{"name":"daemon-set-7xfgv","generateName":"daemon-set-","namespace":"daemonsets-2186","uid":"73c62e14-60d5-45f5-8e63-53db5671308c","resourceVersion":"3134","creationTimestamp":"2022-11-14T01:23:07Z","deletionTimestamp":"2022-11-14T01:23:42Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"849f988f65","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"70a6dc0a8b25ddac00b59427c29e0f1e39efea85f4231b2cb0356e6ab99c6766","cni.projectcalico.org/podIP":"192.168.166.76/32","cni.projectcalico.org/podIPs":"192.168.166.76/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"8e065dd4-5a3f-4645-8e20-a87778b44703","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e065dd4-5a3f-4645-8e20-a87778b44703\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.166.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-jxklx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-jxklx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"Fil
e","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-sq8nr","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-sq8nr"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:07Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:07Z"}],"hostIP":"10.1.0.5","podIP":"192.168.166.76","podIPs":[{"ip":"192.168.166.76"}],"startTime":"2022-11-14T01:23:07Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-14T01:23:10Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://232b5d9537ec5292d50524feea476c734701a1ec30cdef92be22db10724ac877","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lxcl8","generateName":"daemon-set-","namespace":"daemonsets-2186","uid":"ebe7b3b0-03b1-4d74-aad6-97aa9464d85a","resourceVersion":"3135","creationTimestamp":"2022-11-14T01:23:07Z","deletionTimestamp":"2022-11-14T01:23:42Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"849f988f65","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"362aa19ffac37fa453745ddd73c7bbe0cfccf685ccd4a929721d590d4df8c4f8","cni.projectcalico.org/podIP":"192.168.114.78/32","cni.projectcalico.org/podIPs":"192.168.114.78/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"8e065dd4-5a3f-4645-8e20-a87778b44703","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e065dd4
-5a3f-4645-8e20-a87778b44703\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.114.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-kmds8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-kmds8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-bpf2r","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-bpf2r"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:07Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23
:07Z"],"hostIP":"10.1.0.4","podIP":"192.168.114.78","podIPs":[{"ip":"192.168.114.78"}],"startTime":"2022-11-14T01:23:07Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-14T01:23:11Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://bf89599915bd4747efe575fa892f233e43bf603744662077ab287ebb922ad466","started":true}],"qosClass":"BestEffort"}}]}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 01:23:12.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-2186" for this suite. 11/14/22 01:23:12.743
------------------------------
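The spec above lists the DaemonSets in the test namespace, removes them with a single DeleteCollection call, and then checks that the controller-managed pods are torn down. A hedged client-go sketch of that API sequence (not the suite's actual helper) could look like this, reusing the daemonset-name label seen in the pod dump above:

// Rough sketch under assumptions: the namespace and label selector mirror the
// log above; this is not the conformance suite's implementation.
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    ns := "daemonsets-2186" // namespace used by this spec in the log
    sel := "daemonset-name=daemon-set"

    // List the DaemonSets the spec created.
    list, err := cs.AppsV1().DaemonSets(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
    if err != nil {
        panic(err)
    }
    fmt.Printf("found %d DaemonSets\n", len(list.Items))

    // Delete the whole collection in one call; the DaemonSet controller and
    // garbage collector then remove the daemon pods it owns.
    if err := cs.AppsV1().DaemonSets(ns).DeleteCollection(
        context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
        panic(err)
    }
}

Deletion cascades to the pods through their ownerReferences (blockOwnerDeletion is set in the dump), which is why the PodList below still shows the pods with a deletionTimestamp while the garbage collector finishes.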
• [5.951 seconds]
[sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23
should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:823
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:23:06.826 Nov 14 01:23:06.827: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 11/14/22 01:23:06.829
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:23:06.928
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:23:06.989
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146
[It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:823
STEP: Creating simple DaemonSet "daemon-set" 11/14/22 01:23:07.185
STEP: Check that daemon pods launch on every node of the cluster. 11/14/22 01:23:07.227 Nov 14 01:23:07.270: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:07.304: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:07.304: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:08.341: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:08.374: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:08.374: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:09.342: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:09.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:09.375: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:10.342: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:10.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:10.375: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:11.343: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:11.376: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 14 01:23:11.376: INFO: Node capz-conf-bpf2r is running 0 daemon pod, expected 1 Nov 14 01:23:12.342: INFO: DaemonSet pods can't tolerate node capz-conf-5alf7c-control-plane-hknpt with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 14 01:23:12.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 14 01:23:12.375: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: listing all DeamonSets 11/14/22 01:23:12.406
STEP: DeleteCollection of the DaemonSets 11/14/22 01:23:12.437
STEP: Verify that ReplicaSets have been deleted 11/14/22 01:23:12.477
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 Nov 14 01:23:12.570: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3135"},"items":null} Nov 14 01:23:12.604: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3135"},"items":[{"metadata":{"name":"daemon-set-7xfgv","generateName":"daemon-set-","namespace":"daemonsets-2186","uid":"73c62e14-60d5-45f5-8e63-53db5671308c","resourceVersion":"3134","creationTimestamp":"2022-11-14T01:23:07Z","deletionTimestamp":"2022-11-14T01:23:42Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"849f988f65","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"70a6dc0a8b25ddac00b59427c29e0f1e39efea85f4231b2cb0356e6ab99c6766","cni.projectcalico.org/podIP":"192.168.166.76/32","cni.projectcalico.org/podIPs":"192.168.166.76/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"8e065dd4-5a3f-4645-8e20-a87778b44703","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e065dd4-5a3f-4645-8e20-a87778b44703\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.166.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-jxklx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-jxklx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"Fil
e","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-sq8nr","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-sq8nr"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:07Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:07Z"}],"hostIP":"10.1.0.5","podIP":"192.168.166.76","podIPs":[{"ip":"192.168.166.76"}],"startTime":"2022-11-14T01:23:07Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-14T01:23:10Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://232b5d9537ec5292d50524feea476c734701a1ec30cdef92be22db10724ac877","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lxcl8","generateName":"daemon-set-","namespace":"daemonsets-2186","uid":"ebe7b3b0-03b1-4d74-aad6-97aa9464d85a","resourceVersion":"3135","creationTimestamp":"2022-11-14T01:23:07Z","deletionTimestamp":"2022-11-14T01:23:42Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"849f988f65","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"362aa19ffac37fa453745ddd73c7bbe0cfccf685ccd4a929721d590d4df8c4f8","cni.projectcalico.org/podIP":"192.168.114.78/32","cni.projectcalico.org/podIPs":"192.168.114.78/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"8e065dd4-5a3f-4645-8e20-a87778b44703","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e065dd4
-5a3f-4645-8e20-a87778b44703\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-14T01:23:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.114.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-kmds8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-kmds8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-bpf2r","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-bpf2r"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:07Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23:11Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-14T01:23
:07Z"}],"hostIP":"10.1.0.4","podIP":"192.168.114.78","podIPs":[{"ip":"192.168.114.78"}],"startTime":"2022-11-14T01:23:07Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-14T01:23:11Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://bf89599915bd4747efe575fa892f233e43bf603744662077ab287ebb922ad466","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 01:23:12.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "daemonsets-2186" for this suite. �[38;5;243m11/14/22 01:23:12.743�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14m
S�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-node] Pods�[0m �[1mshould cap back-off at MaxContainerBackOff [Slow][NodeConformance]�[0m �[38;5;243mtest/e2e/common/node/pods.go:717�[0m [BeforeEach] [sig-node] Pods set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 01:23:12.789�[0m Nov 14 01:23:12.789: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename pods �[38;5;243m11/14/22 01:23:12.79�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 01:23:12.888�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 01:23:12.948�[0m [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:717 Nov 14 01:23:13.047: INFO: Waiting up to 5m0s for pod "back-off-cap" in namespace "pods-978" to be "running and ready" Nov 14 01:23:13.077: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 30.332844ms Nov 14 01:23:13.077: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 14 01:23:15.109: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062746705s Nov 14 01:23:15.109: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 14 01:23:17.109: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062064972s Nov 14 01:23:17.109: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 14 01:23:19.110: INFO: Pod "back-off-cap": Phase="Running", Reason="", readiness=true. Elapsed: 6.063892021s Nov 14 01:23:19.111: INFO: The phase of Pod back-off-cap is Running (Ready = true) Nov 14 01:23:19.111: INFO: Pod "back-off-cap" satisfied condition "running and ready" �[1mSTEP:�[0m getting restart delay when capped �[38;5;243m11/14/22 01:33:19.146�[0m Nov 14 01:34:49.203: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-11-14 01:29:44 +0000 UTC restartedAt=2022-11-14 01:34:47 +0000 UTC (5m3s) Nov 14 01:40:06.526: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-11-14 01:34:52 +0000 UTC restartedAt=2022-11-14 01:40:05 +0000 UTC (5m13s) Nov 14 01:45:26.940: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-11-14 01:40:10 +0000 UTC restartedAt=2022-11-14 01:45:25 +0000 UTC (5m15s) �[1mSTEP:�[0m getting restart delay after a capped delay �[38;5;243m11/14/22 01:45:26.94�[0m Nov 14 01:50:34.948: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-11-14 01:45:30 +0000 UTC restartedAt=2022-11-14 01:50:33 +0000 UTC (5m3s) [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 Nov 14 01:50:34.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "pods-978" for this suite. 
11/14/22 01:50:34.992
------------------------------
• [SLOW TEST] [1642.239 seconds]
[sig-node] Pods test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:717
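The [SLOW TEST] above measures kubelet's restart back-off: a container that keeps exiting is restarted with a growing delay until the delay levels off near the MaxContainerBackOff cap (the log shows 5m3s, 5m13s, 5m15s, then 5m3s after the cap). Below is a hedged sketch of a pod that reproduces such a crash loop so the delays can be observed in containerStatuses; the pod name, image, and namespace are illustrative, not taken from the suite:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// A container that exits shortly after starting keeps the pod in CrashLoopBackOff;
	// kubelet doubles the restart delay until it reaches the back-off cap.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "back-off-cap-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "crasher",
				Image:   "busybox:1.36", // illustrative image
				Command: []string{"sh", "-c", "sleep 5; exit 1"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The restart delay is the gap between lastState.terminated.finishedAt and
	// state.running.startedAt in status.containerStatuses, as polled by the spec above.
	fmt.Println("created pod:", created.Name)
}
```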
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:50:35.036
Nov 14 01:50:35.036: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 11/14/22 01:50:35.037
STEP: Waiting for a default service account to be provisioned
in namespace �[38;5;243m11/14/22 01:50:35.133�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 01:50:35.193�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil test/e2e/apimachinery/garbage_collector.go:439 �[1mSTEP:�[0m create the rc �[38;5;243m11/14/22 01:50:35.253�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m11/14/22 01:50:40.32�[0m �[1mSTEP:�[0m wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[38;5;243m11/14/22 01:50:40.354�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 01:51:10.389�[0m Nov 14 01:51:10.522: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 01:51:10.554: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 31.595055ms Nov 14 01:51:10.554: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 01:51:10.554: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 01:51:10.872: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Nov 14 01:51:10.872: INFO: Deleting pod "simpletest.rc-f4tjh" in namespace "gc-7426" Nov 14 01:51:10.917: INFO: Deleting pod "simpletest.rc-ndfpc" in namespace "gc-7426" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 01:51:10.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-7426" for this suite. 
11/14/22 01:51:10.993
------------------------------
• [35.992 seconds]
[sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil test/e2e/apimachinery/garbage_collector.go:439
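The spec above deletes a replication controller while leaving the legacy deleteOptions.OrphanDependents field nil, then waits 30 seconds to confirm the garbage collector does not remove the dependent pods. The sketch below shows the explicit modern equivalent, requesting Orphan propagation on delete; the RC name is inferred from the pod names in the log and the call is illustrative rather than the suite's own code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Deleting the RC with Orphan propagation removes the owner object but leaves
	// its pods running (with ownerReferences cleared), which is the behaviour the
	// spec waits 30s to confirm for the nil-OrphanDependents default.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("gc-7426").Delete(ctx, "simpletest.rc",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("gc-7426").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods still present after orphan delete: %d\n", len(pods.Items))
}
```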
�[38;5;243m11/14/22 01:51:10.993�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-node] Variable Expansion�[0m �[1mshould fail substituting values in a volume subpath with backticks [Slow] [Conformance]�[0m �[38;5;243mtest/e2e/common/node/expansion.go:152�[0m [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 01:51:11.031�[0m Nov 14 01:51:11.031: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename var-expansion �[38;5;243m11/14/22 01:51:11.033�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 01:51:11.128�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 01:51:11.188�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Nov 14 01:51:11.285: INFO: Waiting up to 2m0s for pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943" in namespace "var-expansion-4564" to be "container 0 failed with reason CreateContainerConfigError" Nov 14 01:51:11.319: INFO: Pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943": Phase="Pending", Reason="", readiness=false. Elapsed: 33.632833ms Nov 14 01:51:13.350: INFO: Pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065607268s Nov 14 01:51:15.350: INFO: Pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064977654s Nov 14 01:51:15.350: INFO: Pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 14 01:51:15.350: INFO: Deleting pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943" in namespace "var-expansion-4564" Nov 14 01:51:15.386: INFO: Wait up to 5m0s for pod "var-expansion-72346a81-3997-4e9b-9357-3b94826de943" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 14 01:51:17.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "var-expansion-4564" for this suite. 
11/14/22 01:51:17.484
------------------------------
• [6.491 seconds]
[sig-node] Variable Expansion test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152
------------------------------
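The spec above verifies that a volume subPathExpr which expands to a value containing backticks is rejected: the pod never starts and instead reports CreateContainerConfigError, as seen in the log. A sketch of the general shape of such a pod spec follows; the suite's exact field values are not visible in the log, so the env value and names here are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox:1.36", // illustrative image
				Env: []corev1.EnvVar{{
					Name:  "SUBPATH",
					Value: "`bad`", // assumed value; backticks make the expanded subpath invalid
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "work",
					MountPath: "/work",
					// subPathExpr is expanded from the container's env; an invalid result
					// is rejected by kubelet, surfacing CreateContainerConfigError as in the log.
					SubPathExpr: "$(SUBPATH)",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```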
�[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mwith both scale up and down controls configured�[0m �[1mshould keep recommendation within the range with stabilization window and pod limit rate�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:447�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 01:51:17.535�[0m Nov 14 01:51:17.535: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 01:51:17.536�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 01:51:17.632�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 01:51:17.692�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] should keep recommendation within the range with stabilization window and pod limit rate test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:447 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m11/14/22 01:51:17.752�[0m Nov 14 01:51:17.752: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 2 replicas �[38;5;243m11/14/22 01:51:17.753�[0m �[1mSTEP:�[0m Creating deployment consumer in namespace horizontal-pod-autoscaling-332 �[38;5;243m11/14/22 01:51:17.795�[0m I1114 01:51:17.832322 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-332, replica count: 2 I1114 01:51:27.884019 13 runners.go:193] consumer Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 01:51:27.884�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-332 �[38;5;243m11/14/22 01:51:27.926�[0m I1114 01:51:27.963821 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-332, replica count: 1 I1114 01:51:38.014801 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 01:51:43.015: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 14 01:51:43.047: INFO: RC consumer: consume 220 millicores in total Nov 14 01:51:43.047: INFO: RC consumer: setting consumption to 220 millicores in total Nov 14 01:51:43.047: INFO: RC consumer: consume 0 MB in total Nov 14 01:51:43.047: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:51:43.047: INFO: RC consumer: consume custom metric 0 in total Nov 14 01:51:43.047: INFO: RC consumer: disabling consumption of custom metric QPS Nov 14 01:51:43.047: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:51:43.047: INFO: RC consumer: disabling mem consumption �[1mSTEP:�[0m triggering scale up by increasing consumption �[38;5;243m11/14/22 
01:51:43.085�[0m Nov 14 01:51:43.085: INFO: RC consumer: consume 440 millicores in total Nov 14 01:51:43.115: INFO: RC consumer: setting consumption to 440 millicores in total �[1mSTEP:�[0m verifying number of replicas stay in desired range with pod limit rate �[38;5;243m11/14/22 01:51:43.115�[0m Nov 14 01:51:43.146: INFO: expecting there to be in [2, 3] replicas (are: 2) Nov 14 01:51:43.176: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 14 01:51:53.208: INFO: expecting there to be in [2, 3] replicas (are: 2) Nov 14 01:51:53.238: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 14 01:52:03.209: INFO: expecting there to be in [2, 3] replicas (are: 2) Nov 14 01:52:03.240: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:2 DesiredReplicas:2 CurrentCPUUtilizationPercentage:0xc00333bb40} Nov 14 01:52:13.116: INFO: RC consumer: sending request to consume 440 millicores Nov 14 01:52:13.116: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=440&requestSizeMillicores=100 } Nov 14 01:52:13.208: INFO: expecting there to be in [2, 3] replicas (are: 2) Nov 14 01:52:13.239: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:2 DesiredReplicas:2 CurrentCPUUtilizationPercentage:0xc00333be40} Nov 14 01:52:23.210: INFO: expecting there to be in [2, 3] replicas (are: 2) Nov 14 01:52:23.241: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:2 DesiredReplicas:2 CurrentCPUUtilizationPercentage:0xc003680110} Nov 14 01:52:33.208: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:52:33.239: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:2 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003740080} Nov 14 01:52:43.157: INFO: RC consumer: sending request to consume 440 millicores Nov 14 01:52:43.157: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=440&requestSizeMillicores=100 } Nov 14 01:52:43.207: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:52:43.238: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0034a8600} Nov 14 01:52:53.209: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:52:53.240: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0034a8110} Nov 14 01:53:03.209: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:53:03.240: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0034a83c0} Nov 14 01:53:13.209: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:53:13.215: INFO: RC consumer: sending request to consume 440 millicores Nov 14 01:53:13.215: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=440&requestSizeMillicores=100 } Nov 14 01:53:13.241: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0038544f0} Nov 14 01:53:23.209: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:53:23.241: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003854830} Nov 14 01:53:33.210: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:53:33.241: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0034a8780} Nov 14 01:53:43.210: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:53:43.241: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0034a8aa0} Nov 14 01:53:43.255: INFO: RC consumer: sending request to consume 440 millicores Nov 14 01:53:43.255: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=440&requestSizeMillicores=100 } Nov 14 01:53:43.272: INFO: expecting there to be in [2, 3] replicas (are: 3) Nov 14 01:53:43.303: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:52:28 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000c9e900} Nov 14 01:53:43.303: INFO: Number of replicas was stable over 2m0s �[1mSTEP:�[0m waiting for replicas to scale up �[38;5;243m11/14/22 01:53:43.303�[0m Nov 14 01:53:43.334: INFO: waiting for 4 replicas (current: 3) Nov 14 01:54:03.367: INFO: waiting for 4 replicas (current: 3) Nov 14 01:54:13.295: INFO: RC consumer: sending request to consume 440 millicores Nov 14 01:54:13.295: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=440&requestSizeMillicores=100 } Nov 14 01:54:23.368: INFO: waiting for 4 replicas (current: 3) Nov 14 01:54:43.338: INFO: RC consumer: sending request to consume 440 millicores Nov 14 01:54:43.338: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=440&requestSizeMillicores=100 } Nov 14 01:54:43.367: INFO: waiting for 4 replicas (current: 4) Nov 14 01:54:43.367: INFO: time waited for scale up: 1m0.063904598s �[1mSTEP:�[0m triggering scale down by lowering consumption �[38;5;243m11/14/22 01:54:43.367�[0m Nov 14 01:54:43.367: INFO: RC consumer: consume 220 millicores in total Nov 14 01:54:43.378: INFO: RC consumer: setting consumption to 220 millicores in total �[1mSTEP:�[0m verifying number of replicas stay in desired range within stabilisation window �[38;5;243m11/14/22 01:54:43.378�[0m Nov 14 01:54:43.409: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:54:43.440: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 
CurrentCPUUtilizationPercentage:0xc0034a9190} Nov 14 01:54:53.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:54:53.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a82d0} Nov 14 01:55:03.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:55:03.505: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a8590} Nov 14 01:55:13.378: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:55:13.379: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:55:13.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:55:13.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a88b0} Nov 14 01:55:23.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:55:23.503: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a8990} Nov 14 01:55:33.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:55:33.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc000c9f130} Nov 14 01:55:43.420: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:55:43.421: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:55:43.472: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:55:43.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc000c9fac0} Nov 14 01:55:53.475: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:55:53.506: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a8e60} Nov 14 01:56:03.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:56:03.505: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a9130} Nov 14 01:56:13.460: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:56:13.461: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:56:13.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:56:13.506: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a9610} Nov 14 01:56:23.473: INFO: expecting there to be 
in [4, 4] replicas (are: 4) Nov 14 01:56:23.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0006a20c0} Nov 14 01:56:33.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:56:33.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0038545d0} Nov 14 01:56:43.472: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:56:43.500: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:56:43.500: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:56:43.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a97c0} Nov 14 01:56:53.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:56:53.505: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0030027a0} Nov 14 01:57:03.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:57:03.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a8300} Nov 14 01:57:13.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:57:13.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a85d0} Nov 14 01:57:13.540: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:57:13.540: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:57:23.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:57:23.505: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc003002e20} Nov 14 01:57:33.473: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:57:33.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc0034a8a60} Nov 14 01:57:43.474: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:57:43.504: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc003003880} Nov 14 01:57:43.536: INFO: expecting there to be in [4, 4] replicas (are: 4) Nov 14 01:57:43.567: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-14 01:54:28 +0000 UTC CurrentReplicas:4 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc000c9e380} Nov 14 01:57:43.567: INFO: Number of replicas was stable over 3m0s �[1mSTEP:�[0m waiting for replicas to scale down after stabilisation window passed �[38;5;243m11/14/22 01:57:43.567�[0m Nov 14 01:57:43.580: INFO: RC 
consumer: sending request to consume 220 millicores Nov 14 01:57:43.580: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:57:43.597: INFO: waiting for 2 replicas (current: 4) Nov 14 01:58:03.631: INFO: waiting for 2 replicas (current: 4) Nov 14 01:58:13.620: INFO: RC consumer: sending request to consume 220 millicores Nov 14 01:58:13.620: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-332/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Nov 14 01:58:23.631: INFO: waiting for 2 replicas (current: 2) Nov 14 01:58:23.631: INFO: time waited for scale down: 40.06443941s �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m11/14/22 01:58:23.669�[0m Nov 14 01:58:23.669: INFO: RC consumer: stopping metric consumer Nov 14 01:58:23.669: INFO: RC consumer: stopping CPU consumer Nov 14 01:58:23.669: INFO: RC consumer: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-332, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 01:58:33.671�[0m Nov 14 01:58:33.793: INFO: Deleting Deployment.apps consumer took: 39.080904ms Nov 14 01:58:33.894: INFO: Terminating Deployment.apps consumer pods took: 100.813454ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-332, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 01:58:35.659�[0m Nov 14 01:58:35.774: INFO: Deleting ReplicationController consumer-ctrl took: 34.385522ms Nov 14 01:58:35.875: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.75805ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 14 01:58:38.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-332" for this suite. 
11/14/22 01:58:38.074
------------------------------
• [SLOW TEST] [440.574 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/autoscaling/framework.go:23
with both scale up and down controls configured test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:393
should keep recommendation within the range with stabilization window and pod limit rate test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:447
------------------------------
[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods
latency/resource should be within limit when create 10 pods with 0s interval
test/e2e/windows/density.go:68
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:58:38.115
Nov 14 01:58:38.116: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename density-test-windows 11/14/22 01:58:38.117
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:58:38.214
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:58:38.273
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31
[It] latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68
STEP: Creating a batch of pods 11/14/22 01:58:38.334
STEP: Waiting for all Pods to be observed by the watch...
�[38;5;243m11/14/22 01:58:38.334�[0m Nov 14 01:59:08.373: INFO: Waiting for pod test-fc46dae6-756f-43c9-a9fd-ec0cbe5f0465 to disappear Nov 14 01:59:08.378: INFO: Waiting for pod test-f09e4d35-5ed0-4358-b0e6-5980b186d3cb to disappear Nov 14 01:59:08.378: INFO: Waiting for pod test-a07cbb24-4c60-40b4-a67b-604eec3899db to disappear Nov 14 01:59:08.379: INFO: Waiting for pod test-a8b18361-0c2b-43ca-b80c-8b2a013781ac to disappear Nov 14 01:59:08.380: INFO: Waiting for pod test-9ec0d9ff-862e-42b3-911f-cd432ba90abc to disappear Nov 14 01:59:08.410: INFO: Waiting for pod test-0c50ff4b-9e2c-44e0-bcfa-239adcbdf61c to disappear Nov 14 01:59:08.419: INFO: Waiting for pod test-84fea8a7-5186-4ad5-91fd-6c4d8acbf3c7 to disappear Nov 14 01:59:08.420: INFO: Waiting for pod test-5bb2a439-c514-4a20-8ce7-3ba4881239d1 to disappear Nov 14 01:59:08.421: INFO: Waiting for pod test-95a64b16-da0f-473f-a2f4-1d577bff290e to disappear Nov 14 01:59:08.421: INFO: Waiting for pod test-463599c1-4e84-404e-a9a8-c7f111ada20a to disappear Nov 14 01:59:08.427: INFO: Pod test-fc46dae6-756f-43c9-a9fd-ec0cbe5f0465 still exists Nov 14 01:59:08.448: INFO: Pod test-9ec0d9ff-862e-42b3-911f-cd432ba90abc still exists Nov 14 01:59:08.448: INFO: Pod test-a8b18361-0c2b-43ca-b80c-8b2a013781ac still exists Nov 14 01:59:08.459: INFO: Pod test-f09e4d35-5ed0-4358-b0e6-5980b186d3cb still exists Nov 14 01:59:08.459: INFO: Pod test-a07cbb24-4c60-40b4-a67b-604eec3899db still exists Nov 14 01:59:08.475: INFO: Pod test-5bb2a439-c514-4a20-8ce7-3ba4881239d1 still exists Nov 14 01:59:08.476: INFO: Pod test-0c50ff4b-9e2c-44e0-bcfa-239adcbdf61c still exists Nov 14 01:59:08.476: INFO: Pod test-95a64b16-da0f-473f-a2f4-1d577bff290e still exists Nov 14 01:59:08.476: INFO: Pod test-463599c1-4e84-404e-a9a8-c7f111ada20a still exists Nov 14 01:59:08.477: INFO: Pod test-84fea8a7-5186-4ad5-91fd-6c4d8acbf3c7 still exists Nov 14 01:59:38.427: INFO: Waiting for pod test-fc46dae6-756f-43c9-a9fd-ec0cbe5f0465 to disappear Nov 14 01:59:38.448: INFO: Waiting for pod test-9ec0d9ff-862e-42b3-911f-cd432ba90abc to disappear Nov 14 01:59:38.448: INFO: Waiting for pod test-a8b18361-0c2b-43ca-b80c-8b2a013781ac to disappear Nov 14 01:59:38.458: INFO: Pod test-fc46dae6-756f-43c9-a9fd-ec0cbe5f0465 no longer exists Nov 14 01:59:38.460: INFO: Waiting for pod test-f09e4d35-5ed0-4358-b0e6-5980b186d3cb to disappear Nov 14 01:59:38.460: INFO: Waiting for pod test-a07cbb24-4c60-40b4-a67b-604eec3899db to disappear Nov 14 01:59:38.476: INFO: Waiting for pod test-0c50ff4b-9e2c-44e0-bcfa-239adcbdf61c to disappear Nov 14 01:59:38.476: INFO: Waiting for pod test-5bb2a439-c514-4a20-8ce7-3ba4881239d1 to disappear Nov 14 01:59:38.476: INFO: Waiting for pod test-95a64b16-da0f-473f-a2f4-1d577bff290e to disappear Nov 14 01:59:38.477: INFO: Waiting for pod test-463599c1-4e84-404e-a9a8-c7f111ada20a to disappear Nov 14 01:59:38.478: INFO: Waiting for pod test-84fea8a7-5186-4ad5-91fd-6c4d8acbf3c7 to disappear Nov 14 01:59:38.480: INFO: Pod test-9ec0d9ff-862e-42b3-911f-cd432ba90abc no longer exists Nov 14 01:59:38.480: INFO: Pod test-a8b18361-0c2b-43ca-b80c-8b2a013781ac no longer exists Nov 14 01:59:38.508: INFO: Pod test-a07cbb24-4c60-40b4-a67b-604eec3899db no longer exists Nov 14 01:59:38.508: INFO: Pod test-f09e4d35-5ed0-4358-b0e6-5980b186d3cb no longer exists Nov 14 01:59:38.537: INFO: Pod test-463599c1-4e84-404e-a9a8-c7f111ada20a no longer exists Nov 14 01:59:38.537: INFO: Pod test-84fea8a7-5186-4ad5-91fd-6c4d8acbf3c7 no longer exists Nov 14 01:59:38.537: INFO: Pod 
test-95a64b16-da0f-473f-a2f4-1d577bff290e no longer exists
Nov 14 01:59:38.537: INFO: Pod test-5bb2a439-c514-4a20-8ce7-3ba4881239d1 no longer exists
Nov 14 01:59:38.537: INFO: Pod test-0c50ff4b-9e2c-44e0-bcfa-239adcbdf61c no longer exists
[AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/node/init/init.go:32
Nov 14 01:59:38.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] Density [Serial] [Slow] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] Density [Serial] [Slow] tear down framework | framework.go:193
STEP: Destroying namespace "density-test-windows-3190" for this suite. 11/14/22 01:59:38.654
------------------------------
• [60.573 seconds]
[sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/windows/framework.go:27
create a batch of pods test/e2e/windows/density.go:47
latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet
Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:59:38.69
Nov 14 01:59:38.690: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 01:59:38.691
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:59:38.791
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:59:38.851
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
Nov 14 01:59:38.912: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 11/14/22 01:59:38.913
STEP: Creating replicaset rs in namespace horizontal-pod-autoscaling-7645 11/14/22 01:59:38.957
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-7645 11/14/22 01:59:38.957
I1114 01:59:38.991705 13 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-7645, replica count: 1
I1114 01:59:49.045202 13 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/14/22 01:59:49.045
�[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-7645 �[38;5;243m11/14/22 01:59:49.09�[0m I1114 01:59:49.126911 13 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-7645, replica count: 1 I1114 01:59:59.180292 13 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:00:04.180: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 14 02:00:04.212: INFO: RC rs: consume 250 millicores in total Nov 14 02:00:04.212: INFO: RC rs: setting consumption to 250 millicores in total Nov 14 02:00:04.212: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:00:04.212: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:00:04.212: INFO: RC rs: consume 0 MB in total Nov 14 02:00:04.212: INFO: RC rs: disabling mem consumption Nov 14 02:00:04.212: INFO: RC rs: consume custom metric 0 in total Nov 14 02:00:04.212: INFO: RC rs: disabling consumption of custom metric QPS Nov 14 02:00:04.278: INFO: waiting for 3 replicas (current: 1) Nov 14 02:00:24.313: INFO: waiting for 3 replicas (current: 1) Nov 14 02:00:34.274: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:00:34.275: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:00:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:01:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:01:04.316: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:01:04.317: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:01:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:01:34.356: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:01:34.357: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:01:44.314: INFO: waiting for 3 replicas (current: 2) Nov 14 02:02:04.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:02:04.398: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:02:04.398: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:02:24.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:02:34.438: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:02:34.438: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:02:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 
02:03:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:03:04.484: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:03:04.484: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:03:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:03:34.525: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:03:34.525: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:03:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:04:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:04:04.563: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:04:04.563: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:04:24.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:04:34.603: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:04:34.604: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:04:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:05:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:05:04.645: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:05:04.645: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:05:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:05:34.687: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:05:34.688: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:05:44.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:06:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:06:04.728: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:06:04.728: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:06:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:06:34.769: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:06:34.769: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:06:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:07:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:07:04.810: INFO: RC rs: 
sending request to consume 250 millicores Nov 14 02:07:04.810: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:07:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:07:34.850: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:07:34.850: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:07:44.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:08:04.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:08:04.891: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:08:04.891: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:08:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:08:34.936: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:08:34.936: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:08:44.316: INFO: waiting for 3 replicas (current: 2) Nov 14 02:09:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:09:04.975: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:09:04.975: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:09:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:09:35.015: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:09:35.015: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:09:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:10:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:10:05.054: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:10:05.054: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:10:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:10:35.093: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:10:35.093: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:10:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:11:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:11:05.133: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:11:05.133: INFO: ConsumeCPU URL: {https 
capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:11:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:11:35.173: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:11:35.173: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:11:44.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:12:04.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:12:05.213: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:12:05.213: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:12:24.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:12:35.255: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:12:35.255: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:12:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:13:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:13:05.297: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:13:05.297: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:13:24.314: INFO: waiting for 3 replicas (current: 2) Nov 14 02:13:35.336: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:13:35.336: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:13:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:14:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:14:05.376: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:14:05.376: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:14:24.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:14:35.416: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:14:35.416: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:14:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:15:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:15:04.342: INFO: waiting for 3 replicas (current: 2) Nov 14 02:15:04.342: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the 
condition", } Nov 14 02:15:04.342: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0031bfe68, {0x75aabd8?, 0xc0004fe660?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bebc5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75aabd8?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bebc5, 0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88 �[1mSTEP:�[0m Removing consuming RC rs �[38;5;243m11/14/22 02:15:04.376�[0m Nov 14 02:15:04.376: INFO: RC rs: stopping metric consumer Nov 14 02:15:04.376: INFO: RC rs: stopping CPU consumer Nov 14 02:15:04.376: INFO: RC rs: stopping mem consumer �[1mSTEP:�[0m deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-7645, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:15:14.377�[0m Nov 14 02:15:14.530: INFO: Deleting ReplicaSet.apps rs took: 71.208195ms Nov 14 02:15:14.631: INFO: Terminating ReplicaSet.apps rs pods took: 100.530641ms �[1mSTEP:�[0m deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-7645, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:15:16.787�[0m Nov 14 02:15:16.903: INFO: Deleting ReplicationController rs-ctrl took: 34.328401ms Nov 14 02:15:17.004: INFO: Terminating ReplicationController rs-ctrl pods took: 101.014245ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 02:15:18.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m11/14/22 02:15:18.51�[0m �[1mSTEP:�[0m Collecting events from namespace "horizontal-pod-autoscaling-7645". �[38;5;243m11/14/22 02:15:18.51�[0m �[1mSTEP:�[0m Found 19 events. 
�[38;5;243m11/14/22 02:15:18.555�[0m Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:38 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-62zwn Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:39 +0000 UTC - event for rs-62zwn: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7645/rs-62zwn to capz-conf-sq8nr Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:41 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:41 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Created: Created container rs Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:42 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Started: Started container rs Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:49 +0000 UTC - event for rs-ctrl: {replication-controller } SuccessfulCreate: Created pod: rs-ctrl-5skxk Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:49 +0000 UTC - event for rs-ctrl-5skxk: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7645/rs-ctrl-5skxk to capz-conf-bpf2r Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:51 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:51 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Created: Created container rs-ctrl Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:52 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Started: Started container rs-ctrl Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:34 +0000 UTC - event for rs: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:34 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-t4grm Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:34 +0000 UTC - event for rs-t4grm: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7645/rs-t4grm to capz-conf-bpf2r Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:36 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:36 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Created: Created container rs Nov 14 02:15:18.556: INFO: At 2022-11-14 02:00:37 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Started: Started container rs Nov 14 02:15:18.556: INFO: At 2022-11-14 02:15:14 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Killing: Stopping container rs Nov 14 02:15:18.556: INFO: At 2022-11-14 02:15:14 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Killing: Stopping container rs Nov 14 02:15:18.556: INFO: At 2022-11-14 02:15:16 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Killing: Stopping container rs-ctrl Nov 14 02:15:18.590: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 02:15:18.590: INFO: Nov 14 02:15:18.622: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.663: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 8591 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 02:13:53 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:15:18.664: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.695: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.746: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container etcd ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 02:15:18.746: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 02:15:18.746: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container coredns ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:15:18.746: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 02:15:18.747: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 02:15:18.747: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:15:18.747: INFO: Container calico-node ready: true, restart count 0 Nov 14 02:15:18.747: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.747: INFO: Container coredns ready: true, restart count 0 Nov 14 02:15:18.747: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.747: INFO: Container metrics-server ready: true, restart count 0 Nov 14 02:15:18.910: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.910: INFO: Logging node info for node capz-conf-bpf2r Nov 14 02:15:18.941: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 8487 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:12:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: 
{{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a 
registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:15:18.941: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 02:15:18.972: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 02:15:19.022: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.022: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:15:19.022: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.022: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:15:19.022: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.022: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:15:19.022: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 02:15:19.022: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:15:19.022: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:15:19.022: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:15:19.165: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 02:15:19.165: INFO: Logging node info for node capz-conf-sq8nr Nov 14 02:15:19.198: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 8670 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:14:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:15:19.199: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 02:15:19.230: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 02:15:19.279: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.279: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:15:19.279: INFO: calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC 
(1+2 container statuses recorded)
Nov 14 02:15:19.279: INFO: Init container install-cni ready: true, restart count 0
Nov 14 02:15:19.279: INFO: Container calico-node-felix ready: true, restart count 1
Nov 14 02:15:19.279: INFO: Container calico-node-startup ready: true, restart count 0
Nov 14 02:15:19.279: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 14 02:15:19.279: INFO: Container csi-proxy ready: true, restart count 0
Nov 14 02:15:19.279: INFO: containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 02:15:19.279: INFO: Container containerd-logger ready: true, restart count 0
Nov 14 02:15:19.424: INFO: Latency metrics for node capz-conf-sq8nr
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-7645" for this suite. 11/14/22 02:15:19.424
------------------------------
• [FAILED] [940.770 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:69
    [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 01:59:38.69
Nov 14 01:59:38.690: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 01:59:38.691
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 01:59:38.791
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 01:59:38.851
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
Nov 14 01:59:38.912: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 11/14/22 01:59:38.913
STEP: Creating replicaset rs in namespace horizontal-pod-autoscaling-7645 11/14/22 01:59:38.957
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-7645 11/14/22 01:59:38.957
I1114 01:59:38.991705 13 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-7645, replica count: 1
I1114 01:59:49.045202 13 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/14/22 01:59:49.045
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-7645 11/14/22 01:59:49.09
I1114 01:59:49.126911 13 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-7645, replica count: 1
I1114 01:59:59.180292 13 runners.go:193]
rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:00:04.180: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 14 02:00:04.212: INFO: RC rs: consume 250 millicores in total Nov 14 02:00:04.212: INFO: RC rs: setting consumption to 250 millicores in total Nov 14 02:00:04.212: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:00:04.212: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:00:04.212: INFO: RC rs: consume 0 MB in total Nov 14 02:00:04.212: INFO: RC rs: disabling mem consumption Nov 14 02:00:04.212: INFO: RC rs: consume custom metric 0 in total Nov 14 02:00:04.212: INFO: RC rs: disabling consumption of custom metric QPS Nov 14 02:00:04.278: INFO: waiting for 3 replicas (current: 1) Nov 14 02:00:24.313: INFO: waiting for 3 replicas (current: 1) Nov 14 02:00:34.274: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:00:34.275: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:00:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:01:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:01:04.316: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:01:04.317: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:01:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:01:34.356: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:01:34.357: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:01:44.314: INFO: waiting for 3 replicas (current: 2) Nov 14 02:02:04.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:02:04.398: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:02:04.398: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:02:24.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:02:34.438: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:02:34.438: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:02:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:03:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:03:04.484: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:03:04.484: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:03:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:03:34.525: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:03:34.525: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:03:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:04:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:04:04.563: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:04:04.563: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:04:24.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:04:34.603: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:04:34.604: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:04:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:05:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:05:04.645: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:05:04.645: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:05:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:05:34.687: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:05:34.688: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:05:44.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:06:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:06:04.728: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:06:04.728: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:06:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:06:34.769: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:06:34.769: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:06:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:07:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:07:04.810: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:07:04.810: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:07:24.312: INFO: waiting 
for 3 replicas (current: 2) Nov 14 02:07:34.850: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:07:34.850: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:07:44.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:08:04.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:08:04.891: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:08:04.891: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:08:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:08:34.936: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:08:34.936: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:08:44.316: INFO: waiting for 3 replicas (current: 2) Nov 14 02:09:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:09:04.975: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:09:04.975: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:09:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:09:35.015: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:09:35.015: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:09:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:10:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:10:05.054: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:10:05.054: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:10:24.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:10:35.093: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:10:35.093: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:10:44.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:11:04.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:11:05.133: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:11:05.133: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:11:24.313: INFO: waiting for 3 replicas (current: 2) Nov 14 02:11:35.173: INFO: RC rs: sending request to consume 250 
millicores Nov 14 02:11:35.173: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:11:44.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:12:04.312: INFO: waiting for 3 replicas (current: 2) Nov 14 02:12:05.213: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:12:05.213: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:12:24.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:12:35.255: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:12:35.255: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:12:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:13:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:13:05.297: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:13:05.297: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:13:24.314: INFO: waiting for 3 replicas (current: 2) Nov 14 02:13:35.336: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:13:35.336: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:13:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:14:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:14:05.376: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:14:05.376: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:14:24.311: INFO: waiting for 3 replicas (current: 2) Nov 14 02:14:35.416: INFO: RC rs: sending request to consume 250 millicores Nov 14 02:14:35.416: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7645/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:14:44.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:15:04.310: INFO: waiting for 3 replicas (current: 2) Nov 14 02:15:04.342: INFO: waiting for 3 replicas (current: 2) Nov 14 02:15:04.342: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 14 02:15:04.342: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0031bfe68, {0x75aabd8?, 0xc0004fe660?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bebc5, 0xa}}, 0xc000bece10) 
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75aabd8?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bebc5, 0xa}}, {0x75abb3b, 0x3}, ...)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1()
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88
STEP: Removing consuming RC rs 11/14/22 02:15:04.376
Nov 14 02:15:04.376: INFO: RC rs: stopping metric consumer
Nov 14 02:15:04.376: INFO: RC rs: stopping CPU consumer
Nov 14 02:15:04.376: INFO: RC rs: stopping mem consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-7645, will wait for the garbage collector to delete the pods 11/14/22 02:15:14.377
Nov 14 02:15:14.530: INFO: Deleting ReplicaSet.apps rs took: 71.208195ms
Nov 14 02:15:14.631: INFO: Terminating ReplicaSet.apps rs pods took: 100.530641ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-7645, will wait for the garbage collector to delete the pods 11/14/22 02:15:16.787
Nov 14 02:15:16.903: INFO: Deleting ReplicationController rs-ctrl took: 34.328401ms
Nov 14 02:15:17.004: INFO: Terminating ReplicationController rs-ctrl pods took: 101.014245ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/node/init/init.go:32
Nov 14 02:15:18.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/14/22 02:15:18.51
STEP: Collecting events from namespace "horizontal-pod-autoscaling-7645". 11/14/22 02:15:18.51
STEP: Found 19 events. 11/14/22 02:15:18.555
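Editor's note: throughout the captured output above, the load generator keeps CPU pressure on the workload by repeatedly POSTing to the rs-ctrl service through the API server's service proxy (the "ConsumeCPU URL" entries with durationSec=30&millicores=250&requestSizeMillicores=100, sent roughly every 30 seconds). A minimal client-go sketch of that request path follows; the helper and package names are illustrative and not the test suite's actual resource-consumer library.

// Illustrative sketch: drive CPU load on the resource consumer by POSTing to
// /api/v1/namespaces/<ns>/services/<svc>/proxy/ConsumeCPU via the API server,
// mirroring the "ConsumeCPU URL" entries in the log above.
package consumecpu // hypothetical package name for this sketch

import (
	"context"
	"strconv"

	"k8s.io/client-go/kubernetes"
)

// sendConsumeCPU is a hypothetical helper name, not the e2e framework's.
func sendConsumeCPU(ctx context.Context, c kubernetes.Interface, ns, svc string, millicores, durationSec int) error {
	_, err := c.CoreV1().RESTClient().Post().
		Namespace(ns).
		Resource("services").
		Name(svc).
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("millicores", strconv.Itoa(millicores)).
		Param("durationSec", strconv.Itoa(durationSec)).
		Param("requestSizeMillicores", "100").
		DoRaw(ctx)
	return err
}

With 250 millicores of sustained load, the test expects the HPA to scale the ReplicaSet to 3 replicas; in this run it plateaued at 2, which is what triggers the timeout recorded above.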
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:38 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-62zwn
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:39 +0000 UTC - event for rs-62zwn: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7645/rs-62zwn to capz-conf-sq8nr
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:41 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:41 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Created: Created container rs
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:42 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Started: Started container rs
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:49 +0000 UTC - event for rs-ctrl: {replication-controller } SuccessfulCreate: Created pod: rs-ctrl-5skxk
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:49 +0000 UTC - event for rs-ctrl-5skxk: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7645/rs-ctrl-5skxk to capz-conf-bpf2r
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:51 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:51 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Created: Created container rs-ctrl
Nov 14 02:15:18.555: INFO: At 2022-11-14 01:59:52 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Started: Started container rs-ctrl
Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:34 +0000 UTC - event for rs: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target
Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:34 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-t4grm
Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:34 +0000 UTC - event for rs-t4grm: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7645/rs-t4grm to capz-conf-bpf2r
Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:36 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 14 02:15:18.555: INFO: At 2022-11-14 02:00:36 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Created: Created container rs
Nov 14 02:15:18.556: INFO: At 2022-11-14 02:00:37 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Started: Started container rs
Nov 14 02:15:18.556: INFO: At 2022-11-14 02:15:14 +0000 UTC - event for rs-62zwn: {kubelet capz-conf-sq8nr} Killing: Stopping container rs
Nov 14 02:15:18.556: INFO: At 2022-11-14 02:15:14 +0000 UTC - event for rs-t4grm: {kubelet capz-conf-bpf2r} Killing: Stopping container rs
Nov 14 02:15:18.556: INFO: At 2022-11-14 02:15:16 +0000 UTC - event for rs-ctrl-5skxk: {kubelet capz-conf-bpf2r} Killing: Stopping container rs-ctrl
Nov 14 02:15:18.590: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 14 02:15:18.590: INFO:
Nov 14 02:15:18.622: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt
Nov 14 02:15:18.663: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 8591 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil>
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 02:13:53 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:13:53 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:15:18.664: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.695: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.746: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container etcd ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 02:15:18.746: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 02:15:18.746: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container coredns ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 02:15:18.746: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.746: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:15:18.746: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 02:15:18.747: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 02:15:18.747: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:15:18.747: INFO: Container calico-node ready: true, restart count 0 Nov 14 02:15:18.747: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.747: INFO: Container coredns ready: true, restart count 0 Nov 14 02:15:18.747: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:18.747: INFO: Container metrics-server ready: true, restart count 0 Nov 14 02:15:18.910: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:15:18.910: INFO: Logging node info for node capz-conf-bpf2r Nov 14 02:15:18.941: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 8487 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:12:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: 
{{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:12:47 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a 
registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:15:18.941: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 02:15:18.972: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 02:15:19.022: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.022: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:15:19.022: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.022: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:15:19.022: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.022: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:15:19.022: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 02:15:19.022: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:15:19.022: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:15:19.022: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:15:19.165: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 02:15:19.165: INFO: Logging node info for node capz-conf-sq8nr Nov 14 02:15:19.198: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 8670 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:14:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:14:43 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:15:19.199: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 02:15:19.230: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 02:15:19.279: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.279: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:15:19.279: INFO: calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC 
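The &Node{...} dumps and the per-node pod listings in this failure output are the e2e framework's diagnostics; roughly the same information can be pulled straight from the API server. A minimal client-go sketch, assuming the /tmp/kubeconfig path shown elsewhere in the log and one of the node names from the dumps (the pod listing for capz-conf-sq8nr continues below):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "capz-conf-sq8nr" // one of the Windows workers in the dumps

	// Conditions, capacity/allocatable and cached images (the &Node{...} dump).
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}

	// Pods scheduled to the node (the "pods the kubelet thinks is on node" list).
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
}
```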
(1+2 container statuses recorded) Nov 14 02:15:19.279: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:15:19.279: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:15:19.279: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:15:19.279: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.279: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:15:19.279: INFO: containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:15:19.279: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:15:19.424: INFO: Latency metrics for node capz-conf-sq8nr [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-7645" for this suite. �[38;5;243m11/14/22 02:15:19.424�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;9mNov 14 02:15:04.342: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition�[0m �[38;5;9mIn �[1m[It]�[0m�[38;5;9m at: �[1mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:209�[0m �[38;5;9mFull Stack Trace�[0m k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0031bfe68, {0x75aabd8?, 0xc0004fe660?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bebc5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75aabd8?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bebc5, 0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88 �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) �[38;5;243m[Serial] [Slow] Deployment (Container Resource)�[0m �[1mShould scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:166�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:15:19.464�[0m Nov 14 02:15:19.465: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 02:15:19.466�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:15:19.566�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:15:19.626�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:166 Nov 14 02:15:19.686: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m 
Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m11/14/22 02:15:19.687�[0m �[1mSTEP:�[0m Creating deployment test-deployment in namespace horizontal-pod-autoscaling-9048 �[38;5;243m11/14/22 02:15:19.735�[0m I1114 02:15:19.775810 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-9048, replica count: 1 I1114 02:15:29.830057 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 02:15:29.83�[0m �[1mSTEP:�[0m creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-9048 �[38;5;243m11/14/22 02:15:29.875�[0m I1114 02:15:29.911498 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-9048, replica count: 1 I1114 02:15:39.962639 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:15:44.964: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 14 02:15:44.995: INFO: RC test-deployment: consume 0 millicores in total Nov 14 02:15:44.995: INFO: RC test-deployment: disabling CPU consumption Nov 14 02:15:44.995: INFO: RC test-deployment: consume 250 MB in total Nov 14 02:15:44.995: INFO: RC test-deployment: consume custom metric 0 in total Nov 14 02:15:44.995: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 14 02:15:44.995: INFO: RC test-deployment: setting consumption to 250 MB in total Nov 14 02:15:45.061: INFO: waiting for 3 replicas (current: 1) Nov 14 02:16:05.095: INFO: waiting for 3 replicas (current: 1) Nov 14 02:16:14.996: INFO: RC test-deployment: sending request to consume 250 MB Nov 14 02:16:14.996: INFO: ConsumeMem URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9048/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 14 02:16:25.096: INFO: waiting for 3 replicas (current: 1) Nov 14 02:16:45.070: INFO: RC test-deployment: sending request to consume 250 MB Nov 14 02:16:45.071: INFO: ConsumeMem URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9048/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 14 02:16:45.095: INFO: waiting for 3 replicas (current: 3) Nov 14 02:16:45.095: INFO: RC test-deployment: consume 700 MB in total Nov 14 02:16:45.114: INFO: RC test-deployment: setting consumption to 700 MB in total Nov 14 02:16:45.144: INFO: waiting for 5 replicas (current: 3) Nov 14 02:17:05.180: INFO: waiting for 5 replicas (current: 4) Nov 14 02:17:15.114: INFO: RC test-deployment: sending request to consume 700 MB Nov 14 02:17:15.115: INFO: ConsumeMem URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9048/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=700&requestSizeMegabytes=100 } Nov 14 02:17:25.180: INFO: waiting for 5 replicas (current: 4) Nov 14 02:17:45.178: INFO: waiting for 5 replicas (current: 5) �[1mSTEP:�[0m Removing consuming RC test-deployment �[38;5;243m11/14/22 02:17:45.213�[0m Nov 14 02:17:45.213: 
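The "ConsumeMem URL" entries above show how the test drives memory load: requests go to the test-deployment-ctrl service through the API server's service proxy subresource, carrying durationSec, megabytes and requestSizeMegabytes parameters. A minimal client-go sketch of an equivalent request, reusing the namespace, service name and parameters from the log; this approximates the e2e ResourceConsumer helper rather than quoting it:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// POST .../namespaces/<ns>/services/<name>/proxy/ConsumeMem?durationSec=30&megabytes=250&requestSizeMegabytes=100
	// — the same request shape logged as "ConsumeMem URL" above.
	body, err := cs.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-9048").
		Resource("services").
		Name("test-deployment-ctrl").
		SubResource("proxy").
		Suffix("ConsumeMem").
		Param("durationSec", "30").
		Param("megabytes", "250").
		Param("requestSizeMegabytes", "100").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```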
INFO: RC test-deployment: stopping metric consumer Nov 14 02:17:45.213: INFO: RC test-deployment: stopping CPU consumer Nov 14 02:17:45.213: INFO: RC test-deployment: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-9048, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:17:55.216�[0m Nov 14 02:17:55.332: INFO: Deleting Deployment.apps test-deployment took: 34.281604ms Nov 14 02:17:55.433: INFO: Terminating Deployment.apps test-deployment pods took: 100.924295ms �[1mSTEP:�[0m deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-9048, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:17:58.001�[0m Nov 14 02:17:58.117: INFO: Deleting ReplicationController test-deployment-ctrl took: 34.333901ms Nov 14 02:17:58.217: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.758765ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/node/init/init.go:32 Nov 14 02:17:59.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-9048" for this suite. �[38;5;243m11/14/22 02:18:00.014�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [160.585 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m [Serial] [Slow] Deployment (Container Resource) �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:162�[0m Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:166�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:15:19.464�[0m Nov 14 02:15:19.465: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 02:15:19.466�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:15:19.566�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:15:19.626�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:166 Nov 14 02:15:19.686: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m11/14/22 02:15:19.687�[0m �[1mSTEP:�[0m Creating deployment test-deployment in namespace horizontal-pod-autoscaling-9048 �[38;5;243m11/14/22 
02:15:19.735�[0m I1114 02:15:19.775810 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-9048, replica count: 1 I1114 02:15:29.830057 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 02:15:29.83�[0m �[1mSTEP:�[0m creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-9048 �[38;5;243m11/14/22 02:15:29.875�[0m I1114 02:15:29.911498 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-9048, replica count: 1 I1114 02:15:39.962639 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:15:44.964: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 14 02:15:44.995: INFO: RC test-deployment: consume 0 millicores in total Nov 14 02:15:44.995: INFO: RC test-deployment: disabling CPU consumption Nov 14 02:15:44.995: INFO: RC test-deployment: consume 250 MB in total Nov 14 02:15:44.995: INFO: RC test-deployment: consume custom metric 0 in total Nov 14 02:15:44.995: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 14 02:15:44.995: INFO: RC test-deployment: setting consumption to 250 MB in total Nov 14 02:15:45.061: INFO: waiting for 3 replicas (current: 1) Nov 14 02:16:05.095: INFO: waiting for 3 replicas (current: 1) Nov 14 02:16:14.996: INFO: RC test-deployment: sending request to consume 250 MB Nov 14 02:16:14.996: INFO: ConsumeMem URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9048/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 14 02:16:25.096: INFO: waiting for 3 replicas (current: 1) Nov 14 02:16:45.070: INFO: RC test-deployment: sending request to consume 250 MB Nov 14 02:16:45.071: INFO: ConsumeMem URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9048/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 14 02:16:45.095: INFO: waiting for 3 replicas (current: 3) Nov 14 02:16:45.095: INFO: RC test-deployment: consume 700 MB in total Nov 14 02:16:45.114: INFO: RC test-deployment: setting consumption to 700 MB in total Nov 14 02:16:45.144: INFO: waiting for 5 replicas (current: 3) Nov 14 02:17:05.180: INFO: waiting for 5 replicas (current: 4) Nov 14 02:17:15.114: INFO: RC test-deployment: sending request to consume 700 MB Nov 14 02:17:15.115: INFO: ConsumeMem URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9048/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=700&requestSizeMegabytes=100 } Nov 14 02:17:25.180: INFO: waiting for 5 replicas (current: 4) Nov 14 02:17:45.178: INFO: waiting for 5 replicas (current: 5) �[1mSTEP:�[0m Removing consuming RC test-deployment �[38;5;243m11/14/22 02:17:45.213�[0m Nov 14 02:17:45.213: INFO: RC test-deployment: stopping metric consumer Nov 14 02:17:45.213: INFO: RC test-deployment: stopping CPU consumer Nov 14 02:17:45.213: INFO: RC test-deployment: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps test-deployment 
in namespace horizontal-pod-autoscaling-9048, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:17:55.216�[0m Nov 14 02:17:55.332: INFO: Deleting Deployment.apps test-deployment took: 34.281604ms Nov 14 02:17:55.433: INFO: Terminating Deployment.apps test-deployment pods took: 100.924295ms �[1mSTEP:�[0m deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-9048, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:17:58.001�[0m Nov 14 02:17:58.117: INFO: Deleting ReplicationController test-deployment-ctrl took: 34.333901ms Nov 14 02:17:58.217: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.758765ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/node/init/init.go:32 Nov 14 02:17:59.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-9048" for this suite. �[38;5;243m11/14/22 02:18:00.014�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] Cpu Resources [Serial] �[38;5;243mContainer limits�[0m �[1mshould not be exceeded after waiting 2 minutes�[0m �[38;5;243mtest/e2e/windows/cpu_limits.go:45�[0m [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:18:00.055�[0m Nov 14 02:18:00.055: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename cpu-resources-test-windows �[38;5;243m11/14/22 02:18:00.056�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:18:00.153�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:18:00.213�[0m [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should not be exceeded after waiting 2 minutes test/e2e/windows/cpu_limits.go:45 �[1mSTEP:�[0m Creating one pod with limit set to '0.5' �[38;5;243m11/14/22 02:18:00.274�[0m Nov 14 02:18:00.311: INFO: Waiting up to 5m0s for pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4" in namespace "cpu-resources-test-windows-5530" to be "running and ready" Nov 14 02:18:00.342: INFO: 
Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.131524ms Nov 14 02:18:00.342: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:02.376: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064804664s Nov 14 02:18:02.376: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:04.374: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062529355s Nov 14 02:18:04.374: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:06.374: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Running", Reason="", readiness=true. Elapsed: 6.06296202s Nov 14 02:18:06.374: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Running (Ready = true) Nov 14 02:18:06.374: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4" satisfied condition "running and ready" �[1mSTEP:�[0m Creating one pod with limit set to '500m' �[38;5;243m11/14/22 02:18:06.405�[0m Nov 14 02:18:06.439: INFO: Waiting up to 5m0s for pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f" in namespace "cpu-resources-test-windows-5530" to be "running and ready" Nov 14 02:18:06.475: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.246343ms Nov 14 02:18:06.475: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:08.508: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068704576s Nov 14 02:18:08.508: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:10.516: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076695789s Nov 14 02:18:10.516: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:12.507: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.067151033s Nov 14 02:18:12.507: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Running (Ready = true) Nov 14 02:18:12.507: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f" satisfied condition "running and ready" �[1mSTEP:�[0m Waiting 2 minutes �[38;5;243m11/14/22 02:18:12.543�[0m �[1mSTEP:�[0m Ensuring pods are still running �[38;5;243m11/14/22 02:20:12.544�[0m �[1mSTEP:�[0m Ensuring cpu doesn't exceed limit by >5% �[38;5;243m11/14/22 02:20:12.747�[0m �[1mSTEP:�[0m Gathering node summary stats �[38;5;243m11/14/22 02:20:12.747�[0m Nov 14 02:20:12.847: INFO: Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 usage: 0.459822057 �[1mSTEP:�[0m Gathering node summary stats �[38;5;243m11/14/22 02:20:12.847�[0m Nov 14 02:20:12.945: INFO: Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f usage: 0.481255023 [AfterEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 02:20:12.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Cpu Resources [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Cpu Resources [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "cpu-resources-test-windows-5530" for this suite. �[38;5;243m11/14/22 02:20:12.99�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [132.969 seconds]�[0m [sig-windows] [Feature:Windows] Cpu Resources [Serial] �[38;5;243mtest/e2e/windows/framework.go:27�[0m Container limits �[38;5;243mtest/e2e/windows/cpu_limits.go:44�[0m should not be exceeded after waiting 2 minutes �[38;5;243mtest/e2e/windows/cpu_limits.go:45�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:18:00.055�[0m Nov 14 02:18:00.055: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename cpu-resources-test-windows �[38;5;243m11/14/22 02:18:00.056�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:18:00.153�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:18:00.213�[0m [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should not be exceeded after waiting 2 minutes test/e2e/windows/cpu_limits.go:45 �[1mSTEP:�[0m Creating one pod with limit set to '0.5' �[38;5;243m11/14/22 02:18:00.274�[0m Nov 14 02:18:00.311: INFO: Waiting up to 5m0s for pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4" in namespace "cpu-resources-test-windows-5530" to be "running and ready" Nov 14 02:18:00.342: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.131524ms Nov 14 02:18:00.342: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:02.376: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Pending", Reason="", readiness=false. 
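The two cpulimittest pods above are created with CPU limits of '0.5' and '500m' (the same quantity written two ways), left running for two minutes, and then checked against kubelet usage (0.459 and 0.481 cores, both within the 5% tolerance). A minimal sketch of such a pod spec; the agnhost image comes from the node image lists above, but the image choice and the lack of a CPU-burning workload are illustrative assumptions, not the test's own definition:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuLimitPod builds a Windows pod capped at 500 millicores, the limit the
// "Container limits should not be exceeded" spec verifies. The real test runs
// a CPU-consuming workload, which is omitted in this sketch.
func cpuLimitPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Containers: []corev1.Container{{
				Name:  "cpu-limit",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40",
				Args:  []string{"pause"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						// "0.5" and "500m" parse to the same quantity.
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
			}},
		},
	}
}

func main() {
	q := cpuLimitPod("cpulimittest-example").Spec.Containers[0].Resources.Limits[corev1.ResourceCPU]
	fmt.Println(q.String()) // 500m
}
```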
Elapsed: 2.064804664s Nov 14 02:18:02.376: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:04.374: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062529355s Nov 14 02:18:04.374: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:06.374: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4": Phase="Running", Reason="", readiness=true. Elapsed: 6.06296202s Nov 14 02:18:06.374: INFO: The phase of Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 is Running (Ready = true) Nov 14 02:18:06.374: INFO: Pod "cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4" satisfied condition "running and ready" �[1mSTEP:�[0m Creating one pod with limit set to '500m' �[38;5;243m11/14/22 02:18:06.405�[0m Nov 14 02:18:06.439: INFO: Waiting up to 5m0s for pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f" in namespace "cpu-resources-test-windows-5530" to be "running and ready" Nov 14 02:18:06.475: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.246343ms Nov 14 02:18:06.475: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:08.508: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068704576s Nov 14 02:18:08.508: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:10.516: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076695789s Nov 14 02:18:10.516: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:18:12.507: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f": Phase="Running", Reason="", readiness=true. 
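The "Gathering node summary stats" steps above read per-pod CPU usage from the kubelet's summary endpoint, reached through the API server's node proxy. A rough sketch of that request; the node name and the kubelet port 10250 are taken from the node dumps above, while the exact proxy route is an assumption about how the framework reaches it:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/capz-conf-bpf2r:10250/proxy/stats/summary — the kubelet
	// summary API through the API server node proxy.
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("capz-conf-bpf2r:10250").
		SubResource("proxy").
		Suffix("stats/summary").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // JSON with per-node, per-pod and per-container usage
}
```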
Elapsed: 6.067151033s Nov 14 02:18:12.507: INFO: The phase of Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f is Running (Ready = true) Nov 14 02:18:12.507: INFO: Pod "cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f" satisfied condition "running and ready" �[1mSTEP:�[0m Waiting 2 minutes �[38;5;243m11/14/22 02:18:12.543�[0m �[1mSTEP:�[0m Ensuring pods are still running �[38;5;243m11/14/22 02:20:12.544�[0m �[1mSTEP:�[0m Ensuring cpu doesn't exceed limit by >5% �[38;5;243m11/14/22 02:20:12.747�[0m �[1mSTEP:�[0m Gathering node summary stats �[38;5;243m11/14/22 02:20:12.747�[0m Nov 14 02:20:12.847: INFO: Pod cpulimittest-8d0d9141-c89a-47a0-b31c-6e753b4e95e4 usage: 0.459822057 �[1mSTEP:�[0m Gathering node summary stats �[38;5;243m11/14/22 02:20:12.847�[0m Nov 14 02:20:12.945: INFO: Pod cpulimittest-3228db4b-7224-41d3-9e33-c4d185d1b09f usage: 0.481255023 [AfterEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 02:20:12.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Cpu Resources [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Cpu Resources [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "cpu-resources-test-windows-5530" for this suite. �[38;5;243m11/14/22 02:20:12.99�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[
0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:550�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:20:13.037�[0m Nov 14 02:20:13.037: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 02:20:13.038�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:20:13.134�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:20:13.195�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/apimachinery/garbage_collector.go:550 �[1mSTEP:�[0m create the deployment �[38;5;243m11/14/22 02:20:13.256�[0m �[1mSTEP:�[0m Wait for the Deployment to create new ReplicaSet �[38;5;243m11/14/22 02:20:13.292�[0m �[1mSTEP:�[0m delete the deployment �[38;5;243m11/14/22 02:20:13.504�[0m �[1mSTEP:�[0m wait for deployment deletion to see if the garbage collector mistakenly deletes the rs �[38;5;243m11/14/22 02:20:13.54�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 02:20:14.239�[0m Nov 14 02:20:14.372: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 02:20:14.405: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. 
Elapsed: 33.109497ms Nov 14 02:20:14.405: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 02:20:14.405: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 02:20:14.730: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 02:20:14.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-2341" for this suite. �[38;5;243m11/14/22 02:20:14.767�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [1.768 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:550�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:20:13.037�[0m Nov 14 02:20:13.037: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 02:20:13.038�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:20:13.134�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:20:13.195�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/apimachinery/garbage_collector.go:550 �[1mSTEP:�[0m create the deployment �[38;5;243m11/14/22 02:20:13.256�[0m �[1mSTEP:�[0m Wait for the Deployment to create new ReplicaSet �[38;5;243m11/14/22 02:20:13.292�[0m �[1mSTEP:�[0m delete the deployment �[38;5;243m11/14/22 02:20:13.504�[0m �[1mSTEP:�[0m wait for deployment deletion to see if the garbage collector mistakenly deletes the rs �[38;5;243m11/14/22 02:20:13.54�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 02:20:14.239�[0m Nov 14 02:20:14.372: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" 
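The "Waiting up to 5m0s for pod ... to be 'running and ready'" messages are a simple poll on the pod phase and Ready condition. A small sketch of that loop, using the kube-controller-manager pod and namespace named in the log; the 5m0s timeout is from the log, the 2-second interval is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitRunningAndReady polls until the pod reports Phase=Running and a Ready=True
// condition, the same check behind the "running and ready" messages in the log.
func waitRunningAndReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Pod and namespace names taken from the log above.
	fmt.Println(waitRunningAndReady(cs, "kube-system",
		"kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt"))
}
```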
to be "running and ready" Nov 14 02:20:14.405: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 33.109497ms Nov 14 02:20:14.405: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 02:20:14.405: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 02:20:14.730: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 02:20:14.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-2341" for this suite. 
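The spec above boils down to a single delete call: the test creates a Deployment, deletes it with deleteOptions.PropagationPolicy=Orphan, and then checks that the garbage collector has not removed the ReplicaSet the Deployment owned. A minimal client-go sketch of that call, assuming an illustrative namespace and Deployment name (only the /tmp/kubeconfig path above is taken from this run), would look like:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed by the suite above; namespace and
	// deployment name below are illustrative, not taken from this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Delete only the Deployment; with the Orphan propagation policy the
	// garbage collector must leave the owned ReplicaSet (and its Pods)
	// behind, which is what the conformance spec verifies.
	orphan := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("gc-example").Delete(
		context.TODO(), "example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; replicaset orphaned")
}
```

From the CLI, kubectl delete deployment example-deployment --cascade=orphan exercises the same propagation policy.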
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource)
Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 02:20:14.805
Nov 14 02:20:14.805: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 02:20:14.807
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:20:14.912
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:20:14.973
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49
Nov 14 02:20:15.034: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/14/22 02:20:15.035
STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-4482 11/14/22 02:20:15.08
I1114 02:20:15.118806 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-4482, replica count: 1
I1114 02:20:25.170318 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/14/22 02:20:25.17
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-4482 11/14/22 02:20:25.212
I1114 02:20:25.251984 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-4482, replica count: 1
I1114 02:20:35.303616 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 14 02:20:40.303: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1
Nov 14 02:20:40.335: INFO: RC test-deployment: consume 250 millicores in total
Nov 14 02:20:40.335: INFO: RC test-deployment: setting consumption to 250 millicores in total
Nov 14 02:20:40.335: INFO: RC test-deployment: sending request to consume 250 millicores
Nov 14 02:20:40.335: INFO: RC test-deployment: consume 0 MB in total
Nov 14 02:20:40.335: INFO: RC test-deployment: consume custom metric 0 in total
Nov 14 02:20:40.335: INFO: RC test-deployment: disabling consumption of custom metric QPS
Nov 14 02:20:40.335: INFO: RC test-deployment: disabling mem consumption
Nov 14 02:20:40.335: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14
02:20:40.402: INFO: waiting for 3 replicas (current: 1) Nov 14 02:21:00.435: INFO: waiting for 3 replicas (current: 1) Nov 14 02:21:10.401: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:21:10.401: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:21:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:21:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:21:40.447: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:21:40.447: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:22:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:22:13.485: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:22:13.485: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:22:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:22:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:22:43.527: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:22:43.527: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:23:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:23:13.568: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:23:13.568: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:23:20.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:23:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:23:43.611: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:23:43.611: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:24:00.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:24:13.652: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:24:13.653: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:24:20.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:24:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:24:43.692: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:24:43.692: INFO: ConsumeCPU URL: {https 
capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:25:00.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:25:13.733: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:25:13.733: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:25:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:25:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:25:43.772: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:25:43.773: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:26:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:26:13.814: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:26:13.814: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:26:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:26:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:26:43.856: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:26:43.856: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:27:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:27:13.899: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:27:13.899: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:27:20.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:27:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:27:43.940: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:27:43.940: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:28:00.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:28:13.981: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:28:13.981: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:28:20.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:28:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:28:44.021: 
INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:28:44.021: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:29:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:29:14.071: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:29:14.071: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:29:20.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:29:40.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:29:44.110: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:29:44.111: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:30:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:30:14.151: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:30:14.152: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:30:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:30:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:30:44.194: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:30:44.194: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:31:00.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:31:14.236: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:31:14.236: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:31:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:31:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:31:44.276: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:31:44.276: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:32:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:32:14.320: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:32:14.320: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:32:20.436: INFO: 
waiting for 3 replicas (current: 2) Nov 14 02:32:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:32:44.361: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:32:44.361: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:33:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:33:14.404: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:33:14.404: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:33:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:33:40.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:33:44.446: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:33:44.446: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:34:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:34:14.489: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:34:14.489: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:34:20.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:34:40.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:34:44.536: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:34:44.536: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:35:00.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:14.576: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:35:14.576: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:35:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:40.467: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:40.467: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 14 02:35:40.467: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00317be68, {0x75d77c5?, 0xc00099f020?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75d77c5?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 
0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88
STEP: Removing consuming RC test-deployment 11/14/22 02:35:40.502
Nov 14 02:35:40.503: INFO: RC test-deployment: stopping metric consumer
Nov 14 02:35:40.503: INFO: RC test-deployment: stopping CPU consumer
Nov 14 02:35:40.503: INFO: RC test-deployment: stopping mem consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-4482, will wait for the garbage collector to delete the pods 11/14/22 02:35:50.503
Nov 14 02:35:50.622: INFO: Deleting Deployment.apps test-deployment took: 36.106123ms
Nov 14 02:35:50.723: INFO: Terminating Deployment.apps test-deployment pods took: 101.042684ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-4482, will wait for the garbage collector to delete the pods 11/14/22 02:35:52.986
Nov 14 02:35:53.104: INFO: Deleting ReplicationController test-deployment-ctrl took: 36.015897ms
Nov 14 02:35:53.205: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.86095ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32
Nov 14 02:35:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/14/22 02:35:55.116
STEP: Collecting events from namespace "horizontal-pod-autoscaling-4482". 11/14/22 02:35:55.116
STEP: Found 21 events. 11/14/22 02:35:55.151
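For context on the failure above: this spec drives an autoscaling/v2 HorizontalPodAutoscaler that targets average CPU utilization on test-deployment and expects it to grow from 1 replica to 3 (and later to 5) while the resource consumer holds 250 millicores of load. A rough client-go sketch of such an HPA, with an assumed 50% utilization target and an illustrative namespace (neither is read from this run), would be:

```go
package main

import (
	"context"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed by the suite; the namespace and the 50%
	// target below are illustrative assumptions, not values from this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	minReplicas := int32(1)
	targetCPU := int32(50) // percent of the pod's CPU request

	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			// Scale the Deployment that runs the resource consumer.
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "test-deployment",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5, // the spec later expects a second scale-up to 5 pods
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					// "Average Utilization for aggregation" in the spec name:
					// average CPU usage across pods as a percent of request.
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
		},
	}

	if _, err := cs.AutoscalingV2().HorizontalPodAutoscalers("default").
		Create(context.TODO(), hpa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The event list that follows shows the autoscaler did rescale once (SuccessfulRescale to 2 replicas at 02:21:10), but no further rescale event appears before the 15m0s timeout.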
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:15 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 1
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:15 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-28glx
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:15 +0000 UTC - event for test-deployment-54fb67b787-28glx: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4482/test-deployment-54fb67b787-28glx to capz-conf-bpf2r
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:17 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:18 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Created: Created container test-deployment
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:19 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Started: Started container test-deployment
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:25 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-j4xfh
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:25 +0000 UTC - event for test-deployment-ctrl-j4xfh: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4482/test-deployment-ctrl-j4xfh to capz-conf-sq8nr
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:27 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Created: Created container test-deployment-ctrl
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:27 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:29 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Started: Started container test-deployment-ctrl
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 2 from 1
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-ggs8l
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4482/test-deployment-54fb67b787-ggs8l to capz-conf-sq8nr
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:12 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Created: Created container test-deployment
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:12 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:13 +0000 UTC -
event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Started: Started container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:35:50 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Killing: Stopping container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:35:50 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:35:53 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment-ctrl Nov 14 02:35:55.184: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 02:35:55.184: INFO: Nov 14 02:35:55.219: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.250: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 11034 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 02:34:18 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:35:55.251: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.285: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.343: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container coredns ready: true, restart count 0 Nov 14 02:35:55.343: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container metrics-server ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:35:55.343: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 02:35:55.343: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:35:55.343: INFO: Container calico-node ready: true, restart count 0 Nov 14 02:35:55.343: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container etcd ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 02:35:55.343: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 02:35:55.343: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container coredns ready: true, restart count 0 Nov 14 02:35:55.507: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.507: INFO: Logging node info for node capz-conf-bpf2r Nov 14 02:35:55.539: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 10935 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:33:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: 
{{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a 
registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:35:55.539: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 02:35:55.572: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 02:35:55.624: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 02:35:55.624: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:35:55.624: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:35:55.624: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:35:55.624: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.624: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:35:55.624: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.624: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:35:55.624: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.624: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:35:55.785: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 02:35:55.785: INFO: Logging node info for node capz-conf-sq8nr Nov 14 02:35:55.817: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 11116 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:35:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:35:55.818: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 02:35:55.849: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 02:35:55.899: INFO: containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.899: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:35:55.899: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 
+0000 UTC (0+1 container statuses recorded)
Nov 14 02:35:55.899: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 02:35:55.899: INFO: calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC (1+2 container statuses recorded)
Nov 14 02:35:55.899: INFO: Init container install-cni ready: true, restart count 0
Nov 14 02:35:55.899: INFO: Container calico-node-felix ready: true, restart count 1
Nov 14 02:35:55.899: INFO: Container calico-node-startup ready: true, restart count 0
Nov 14 02:35:55.899: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 14 02:35:55.899: INFO: Container csi-proxy ready: true, restart count 0
Nov 14 02:35:56.057: INFO: Latency metrics for node capz-conf-sq8nr
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4482" for this suite. 11/14/22 02:35:56.057
------------------------------
• [FAILED] [941.288 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23
[Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 02:20:14.805
Nov 14 02:20:14.805: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 02:20:14.807
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:20:14.912
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:20:14.973
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49
Nov 14 02:20:15.034: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/14/22 02:20:15.035
STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-4482 11/14/22 02:20:15.08
I1114 02:20:15.118806 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-4482,
replica count: 1 I1114 02:20:35.303616 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:20:40.303: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 14 02:20:40.335: INFO: RC test-deployment: consume 250 millicores in total Nov 14 02:20:40.335: INFO: RC test-deployment: setting consumption to 250 millicores in total Nov 14 02:20:40.335: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:20:40.335: INFO: RC test-deployment: consume 0 MB in total Nov 14 02:20:40.335: INFO: RC test-deployment: consume custom metric 0 in total Nov 14 02:20:40.335: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 14 02:20:40.335: INFO: RC test-deployment: disabling mem consumption Nov 14 02:20:40.335: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:20:40.402: INFO: waiting for 3 replicas (current: 1) Nov 14 02:21:00.435: INFO: waiting for 3 replicas (current: 1) Nov 14 02:21:10.401: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:21:10.401: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:21:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:21:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:21:40.447: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:21:40.447: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:22:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:22:13.485: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:22:13.485: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:22:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:22:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:22:43.527: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:22:43.527: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:23:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:23:13.568: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:23:13.568: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:23:20.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:23:40.435: 
INFO: waiting for 3 replicas (current: 2) Nov 14 02:23:43.611: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:23:43.611: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:24:00.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:24:13.652: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:24:13.653: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:24:20.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:24:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:24:43.692: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:24:43.692: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:25:00.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:25:13.733: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:25:13.733: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:25:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:25:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:25:43.772: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:25:43.773: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:26:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:26:13.814: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:26:13.814: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:26:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:26:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:26:43.856: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:26:43.856: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:27:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:27:13.899: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:27:13.899: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:27:20.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:27:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:27:43.940: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:27:43.940: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:28:00.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:28:13.981: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:28:13.981: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:28:20.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:28:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:28:44.021: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:28:44.021: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:29:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:29:14.071: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:29:14.071: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:29:20.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:29:40.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:29:44.110: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:29:44.111: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:30:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:30:14.151: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:30:14.152: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:30:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:30:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:30:44.194: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:30:44.194: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:31:00.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:31:14.236: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:31:14.236: INFO: ConsumeCPU URL: {https 
capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:31:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:31:40.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:31:44.276: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:31:44.276: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:32:00.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:32:14.320: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:32:14.320: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:32:20.436: INFO: waiting for 3 replicas (current: 2) Nov 14 02:32:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:32:44.361: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:32:44.361: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:33:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:33:14.404: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:33:14.404: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:33:20.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:33:40.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:33:44.446: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:33:44.446: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:34:00.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:34:14.489: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:34:14.489: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:34:20.438: INFO: waiting for 3 replicas (current: 2) Nov 14 02:34:40.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:34:44.536: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:34:44.536: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:35:00.434: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:14.576: 
INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 02:35:14.576: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4482/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:35:20.437: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:40.435: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:40.467: INFO: waiting for 3 replicas (current: 2) Nov 14 02:35:40.467: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 14 02:35:40.467: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00317be68, {0x75d77c5?, 0xc00099f020?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75d77c5?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88 STEP: Removing consuming RC test-deployment 11/14/22 02:35:40.502 Nov 14 02:35:40.503: INFO: RC test-deployment: stopping metric consumer Nov 14 02:35:40.503: INFO: RC test-deployment: stopping CPU consumer Nov 14 02:35:40.503: INFO: RC test-deployment: stopping mem consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-4482, will wait for the garbage collector to delete the pods 11/14/22 02:35:50.503 Nov 14 02:35:50.622: INFO: Deleting Deployment.apps test-deployment took: 36.106123ms Nov 14 02:35:50.723: INFO: Terminating Deployment.apps test-deployment pods took: 101.042684ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-4482, will wait for the garbage collector to delete the pods 11/14/22 02:35:52.986 Nov 14 02:35:53.104: INFO: Deleting ReplicationController test-deployment-ctrl took: 36.015897ms Nov 14 02:35:53.205: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.86095ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 02:35:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/14/22 02:35:55.116 STEP: Collecting events from namespace "horizontal-pod-autoscaling-4482". 11/14/22 02:35:55.116 STEP: Found 21 events. 
11/14/22 02:35:55.151 Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:15 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 1 Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:15 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-28glx Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:15 +0000 UTC - event for test-deployment-54fb67b787-28glx: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4482/test-deployment-54fb67b787-28glx to capz-conf-bpf2r Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:17 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:18 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Created: Created container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:19 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Started: Started container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:25 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-j4xfh Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:25 +0000 UTC - event for test-deployment-ctrl-j4xfh: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4482/test-deployment-ctrl-j4xfh to capz-conf-sq8nr Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:27 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Created: Created container test-deployment-ctrl Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:27 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 14 02:35:55.151: INFO: At 2022-11-14 02:20:29 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Started: Started container test-deployment-ctrl Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 2 from 1 Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-ggs8l Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:10 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4482/test-deployment-54fb67b787-ggs8l to capz-conf-sq8nr Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:12 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Created: Created container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:12 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:35:55.151: INFO: At 2022-11-14 02:21:13 +0000 UTC - 
event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Started: Started container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:35:50 +0000 UTC - event for test-deployment-54fb67b787-28glx: {kubelet capz-conf-bpf2r} Killing: Stopping container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:35:50 +0000 UTC - event for test-deployment-54fb67b787-ggs8l: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment Nov 14 02:35:55.151: INFO: At 2022-11-14 02:35:53 +0000 UTC - event for test-deployment-ctrl-j4xfh: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment-ctrl Nov 14 02:35:55.184: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 02:35:55.184: INFO: Nov 14 02:35:55.219: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.250: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 11034 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 02:34:18 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:34:17 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:35:55.251: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.285: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.343: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container coredns ready: true, restart count 0 Nov 14 02:35:55.343: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container metrics-server ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:35:55.343: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 02:35:55.343: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:35:55.343: INFO: Container calico-node ready: true, restart count 0 Nov 14 02:35:55.343: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container etcd ready: true, restart count 0 Nov 14 02:35:55.343: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 02:35:55.343: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 02:35:55.343: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.343: INFO: Container coredns ready: true, restart count 0 Nov 14 02:35:55.507: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:35:55.507: INFO: Logging node info for node capz-conf-bpf2r Nov 14 02:35:55.539: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 10935 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:33:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: 
{{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:33:13 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a 
registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:35:55.539: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 02:35:55.572: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 02:35:55.624: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 02:35:55.624: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:35:55.624: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:35:55.624: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:35:55.624: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.624: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:35:55.624: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.624: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:35:55.624: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.624: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:35:55.785: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 02:35:55.785: INFO: Logging node info for node capz-conf-sq8nr Nov 14 02:35:55.817: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 11116 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:35:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:35:09 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:35:55.818: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 02:35:55.849: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 02:35:55.899: INFO: containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.899: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:35:55.899: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 
+0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.899: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:35:55.899: INFO: calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC (1+2 container statuses recorded) Nov 14 02:35:55.899: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:35:55.899: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:35:55.899: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:35:55.899: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded) Nov 14 02:35:55.899: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:35:56.057: INFO: Latency metrics for node capz-conf-sq8nr [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-4482" for this suite. 11/14/22 02:35:56.057 << End Captured GinkgoWriter Output Nov 14 02:35:40.467: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition In [It] at: test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00317be68, {0x75d77c5?, 0xc00099f020?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75d77c5?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support can read and write file to remote SMB folder test/e2e/windows/gmsa_full.go:168 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/14/22 02:35:56.096 Nov 14 02:35:56.096: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gmsa-full-test-windows 11/14/22 02:35:56.097 STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:35:56.193 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:35:56.254 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [It] can read and write file to remote SMB folder test/e2e/windows/gmsa_full.go:168 STEP: finding the worker node that fulfills this test's assumptions 11/14/22 02:35:56.315 Nov 14 02:35:56.347: INFO: Expected to find exactly one node 
with the "agentpool=windowsgmsa" label, found 0 [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 14 02:35:56.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gmsa-full-test-windows-7172" for this suite. �[38;5;243m11/14/22 02:35:56.383�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS [SKIPPED] [0.325 seconds]�[0m [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] �[38;5;243mtest/e2e/windows/framework.go:27�[0m GMSA support �[38;5;243mtest/e2e/windows/gmsa_full.go:97�[0m �[38;5;14m�[1m[It] can read and write file to remote SMB folder�[0m �[38;5;243mtest/e2e/windows/gmsa_full.go:168�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:35:56.096�[0m Nov 14 02:35:56.096: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gmsa-full-test-windows �[38;5;243m11/14/22 02:35:56.097�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:35:56.193�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:35:56.254�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [It] can read and write file to remote SMB folder test/e2e/windows/gmsa_full.go:168 �[1mSTEP:�[0m finding the worker node that fulfills this test's assumptions �[38;5;243m11/14/22 02:35:56.315�[0m Nov 14 02:35:56.347: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0 [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 14 02:35:56.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gmsa-full-test-windows-7172" for this suite. 
�[38;5;243m11/14/22 02:35:56.383�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;14mExpected to find exactly one node with the "agentpool=windowsgmsa" label, found 0�[0m �[38;5;14mIn �[1m[It]�[0m�[38;5;14m at: �[1mtest/e2e/windows/gmsa_full.go:174�[0m �[38;5;14mFull Stack Trace�[0m k8s.io/kubernetes/test/e2e/windows.glob..func5.1.2() test/e2e/windows/gmsa_full.go:174 +0x665 �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[1mvalidates lower priority pod preemption by critical pod [Conformance]�[0m �[38;5;243mtest/e2e/scheduling/preemption.go:222�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:35:56.423�[0m Nov 14 02:35:56.423: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m11/14/22 02:35:56.424�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:35:56.522�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:35:56.583�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:96 Nov 14 02:35:56.747: INFO: Waiting up to 1m0s for all nodes to be ready Nov 14 02:36:57.036: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:222 �[1mSTEP:�[0m Create pods that use 4/5 of node resources. �[38;5;243m11/14/22 02:36:57.067�[0m Nov 14 02:36:57.154: INFO: Created pod: pod0-0-sched-preemption-low-priority Nov 14 02:36:57.191: INFO: Created pod: pod0-1-sched-preemption-medium-priority Nov 14 02:36:57.268: INFO: Created pod: pod1-0-sched-preemption-medium-priority Nov 14 02:36:57.303: INFO: Created pod: pod1-1-sched-preemption-medium-priority �[1mSTEP:�[0m Wait for pods to be scheduled. �[38;5;243m11/14/22 02:36:57.303�[0m Nov 14 02:36:57.303: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-69" to be "running" Nov 14 02:36:57.334: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 30.96007ms Nov 14 02:36:59.369: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066105636s Nov 14 02:37:01.367: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064017888s Nov 14 02:37:03.368: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065145448s Nov 14 02:37:05.368: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.064493165s Nov 14 02:37:07.367: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063956141s Nov 14 02:37:09.368: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 12.064619043s Nov 14 02:37:11.367: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 14.063683847s Nov 14 02:37:13.368: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 16.064421758s Nov 14 02:37:13.368: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Nov 14 02:37:13.368: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-69" to be "running" Nov 14 02:37:13.400: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 32.414332ms Nov 14 02:37:13.400: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Nov 14 02:37:13.400: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-69" to be "running" Nov 14 02:37:13.431: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 30.946913ms Nov 14 02:37:13.431: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Nov 14 02:37:13.431: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-69" to be "running" Nov 14 02:37:13.462: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.043945ms Nov 14 02:37:13.462: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" �[1mSTEP:�[0m Run a critical pod that use same resources as that of a lower priority pod �[38;5;243m11/14/22 02:37:13.462�[0m Nov 14 02:37:13.504: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" Nov 14 02:37:13.546: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 42.07829ms Nov 14 02:37:15.579: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074798498s Nov 14 02:37:17.579: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074477918s Nov 14 02:37:19.578: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074060164s Nov 14 02:37:21.579: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.074604678s Nov 14 02:37:21.579: INFO: Pod "critical-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 02:37:21.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-69" for this suite. 
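The preemption spec fills 4/5 of each node's resources with low- and medium-priority pods and then submits a "critical-pod" into kube-system, expecting the scheduler to preempt a lower-priority pod to make room for it. A sketch of how such a pod can be created with client-go; the pause image, the resource requests, and the choice of the built-in system-cluster-critical priority class are illustrative assumptions rather than the spec's exact manifest:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// criticalPod builds a pod that asks for the built-in "system-cluster-critical"
// priority class. Scheduled into kube-system on a full node, such a pod lets the
// scheduler preempt lower-priority pods, which is what the spec above verifies.
func criticalPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("200Mi"),
					},
				},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if _, err := cs.CoreV1().Pods("kube-system").Create(context.TODO(), criticalPod(), metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}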
11/14/22 02:37:22.035 ------------------------------ • [85.651 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:222 Begin Captured GinkgoWriter Output >> [captured GinkgoWriter output omitted; it repeats the inline log above verbatim]
�[38;5;243m11/14/22 02:37:22.035�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Namespaces [Serial]�[0m �[1mshould apply an update to a Namespace [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/namespace.go:366�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:37:22.078�[0m Nov 14 02:37:22.078: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename namespaces �[38;5;243m11/14/22 02:37:22.079�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:37:22.176�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:37:22.237�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 �[1mSTEP:�[0m Updating Namespace "namespaces-4724" �[38;5;243m11/14/22 02:37:22.298�[0m Nov 14 02:37:22.367: INFO: Namespace "namespaces-4724" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"f888dcd1-3d1c-4e00-bec6-4a96a19df9f1", "kubernetes.io/metadata.name":"namespaces-4724", "namespaces-4724":"updated", "pod-security.kubernetes.io/enforce":"baseline"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 02:37:22.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "namespaces-4724" for this suite. 
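The Namespaces spec passes in well under a second: it updates its own namespace and verifies the resulting label set logged above. A small client-go sketch of that kind of update using a merge patch; the helper name is hypothetical and the conformance test itself may apply the change differently:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// addNamespaceLabel merge-patches one label onto an existing namespace, the same
// kind of in-place update the spec above verifies on "namespaces-4724".
func addNamespaceLabel(ctx context.Context, cs kubernetes.Interface, ns, key, value string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, value))
	_, err := cs.CoreV1().Namespaces().Patch(ctx, ns, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}

Calling addNamespaceLabel(ctx, cs, "namespaces-4724", "namespaces-4724", "updated") would produce the "namespaces-4724":"updated" entry shown in the logged label map, alongside the labels the framework and pod-security admission already set.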
11/14/22 02:37:22.403 ------------------------------ • [0.359 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 Begin Captured GinkgoWriter Output >> [captured GinkgoWriter output omitted; it repeats the inline log above verbatim]
�[38;5;243m11/14/22 02:37:22.403�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243m[Serial] [Slow] ReplicationController�[0m �[1mShould scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:80�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:37:22.442�[0m Nov 14 02:37:22.442: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 02:37:22.443�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:37:22.546�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:37:22.606�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:80 Nov 14 02:37:22.669: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC rc via /v1, Kind=ReplicationController with 1 replicas �[38;5;243m11/14/22 02:37:22.67�[0m �[1mSTEP:�[0m creating replication controller rc in namespace horizontal-pod-autoscaling-4624 �[38;5;243m11/14/22 02:37:22.721�[0m I1114 02:37:22.762023 13 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-4624, replica count: 1 I1114 02:37:32.813989 13 runners.go:193] rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 02:37:32.814�[0m �[1mSTEP:�[0m creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-4624 �[38;5;243m11/14/22 02:37:32.861�[0m I1114 02:37:32.901933 13 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-4624, replica count: 1 I1114 02:37:42.953430 13 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:37:47.955: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Nov 14 02:37:47.986: INFO: RC rc: consume 250 
millicores in total Nov 14 02:37:47.986: INFO: RC rc: setting consumption to 250 millicores in total Nov 14 02:37:47.986: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:37:47.986: INFO: RC rc: consume 0 MB in total Nov 14 02:37:47.986: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:37:47.986: INFO: RC rc: consume custom metric 0 in total Nov 14 02:37:47.987: INFO: RC rc: disabling consumption of custom metric QPS Nov 14 02:37:47.986: INFO: RC rc: disabling mem consumption Nov 14 02:37:48.052: INFO: waiting for 3 replicas (current: 1) Nov 14 02:38:08.086: INFO: waiting for 3 replicas (current: 1) Nov 14 02:38:21.051: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:38:21.051: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:38:28.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:38:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:38:51.097: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:38:51.097: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:39:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:39:21.142: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:39:21.142: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:39:28.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:39:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:39:51.184: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:39:51.184: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:40:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:40:21.225: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:40:21.225: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:40:28.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:40:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:40:51.265: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:40:51.266: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:41:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:41:21.305: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:41:21.305: INFO: ConsumeCPU URL: {https 
capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:41:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:41:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:41:51.345: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:41:51.345: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:42:08.090: INFO: waiting for 3 replicas (current: 2) Nov 14 02:42:21.387: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:42:21.387: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:42:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:42:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:42:51.442: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:42:51.442: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:43:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:43:21.486: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:43:21.486: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:43:28.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:43:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:43:51.529: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:43:51.529: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:44:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:44:21.568: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:44:21.568: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:44:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:44:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:44:51.607: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:44:51.607: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:45:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:45:21.651: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:45:21.651: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:45:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:45:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:45:51.694: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:45:51.694: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:46:08.089: INFO: waiting for 3 replicas (current: 2) Nov 14 02:46:21.743: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:46:21.743: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:46:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:46:48.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:46:51.788: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:46:51.788: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:47:08.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:47:21.831: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:47:21.831: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:47:28.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:47:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:47:51.875: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:47:51.875: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:48:08.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:48:21.915: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:48:21.916: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:48:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:48:48.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:48:51.955: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:48:51.955: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:49:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:49:21.995: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:49:21.996: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false 
false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:49:28.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:49:48.089: INFO: waiting for 3 replicas (current: 2) Nov 14 02:49:52.037: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:49:52.037: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:50:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:50:22.085: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:50:22.085: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:50:28.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:50:48.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:50:52.129: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:50:52.129: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:51:08.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:51:22.173: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:51:22.173: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:51:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:51:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:51:52.214: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:51:52.214: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:52:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:22.253: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:52:22.253: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:52:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:48.116: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:48.116: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 14 02:52:48.116: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003ba1e68, {0x75aabd2?, 0xc00035d980?}, {{0x0, 0x0}, {0x75aac1c, 0x2}, {0x75fb175, 0x15}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75aabd2?, 0x62ae505?}, {{0x0, 0x0}, {0x75aac1c, 0x2}, {0x75fb175, 0x15}}, {0x75abb3b, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:81 +0x8b �[1mSTEP:�[0m Removing consuming RC rc �[38;5;243m11/14/22 02:52:48.152�[0m Nov 14 02:52:48.152: INFO: RC rc: stopping metric consumer Nov 14 02:52:48.152: INFO: RC rc: stopping CPU consumer Nov 14 02:52:48.152: INFO: RC rc: stopping mem consumer �[1mSTEP:�[0m deleting ReplicationController rc in namespace horizontal-pod-autoscaling-4624, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:52:58.152�[0m Nov 14 02:52:58.273: INFO: Deleting ReplicationController rc took: 36.946572ms Nov 14 02:52:58.373: INFO: Terminating ReplicationController rc pods took: 100.334813ms �[1mSTEP:�[0m deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-4624, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 02:53:00.531�[0m Nov 14 02:53:00.649: INFO: Deleting ReplicationController rc-ctrl took: 35.937386ms Nov 14 02:53:00.749: INFO: Terminating ReplicationController rc-ctrl pods took: 100.508266ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 02:53:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m11/14/22 02:53:02.643�[0m �[1mSTEP:�[0m Collecting events from namespace "horizontal-pod-autoscaling-4624". �[38;5;243m11/14/22 02:53:02.643�[0m �[1mSTEP:�[0m Found 19 events. 
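The ConsumeCPU lines repeated above are the resource consumer keeping a 250-millicore load on the rc pods through the API server's service proxy while the test polls for the target replica count; the failure is simply that the HorizontalPodAutoscaler never reached 3 replicas within the 15-minute timeout. A sketch of issuing one such proxy request with client-go, using the namespace, service name, and query parameters taken from the log (illustrative; the e2e ResourceConsumer has its own helper for this):

package sketch

import (
	"context"
	"strconv"

	"k8s.io/client-go/kubernetes"
)

// consumeCPU asks the resource-consumer controller service to burn the given
// number of millicores for 30 seconds, going through the API server's service
// proxy just like the ConsumeCPU URLs logged above.
func consumeCPU(ctx context.Context, cs kubernetes.Interface, namespace, service string, millicores int) error {
	return cs.CoreV1().RESTClient().Post().
		Namespace(namespace).
		Resource("services").
		Name(service).
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("millicores", strconv.Itoa(millicores)).
		Param("durationSec", "30").
		Param("requestSizeMillicores", "100").
		Do(ctx).
		Error()
}

Here, consumeCPU(ctx, cs, "horizontal-pod-autoscaling-4624", "rc-ctrl", 250) reproduces the request the consumer sends roughly every 30 seconds in the log above.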
�[38;5;243m11/14/22 02:53:02.678�[0m Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:22 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-xxs4r Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:22 +0000 UTC - event for rc-xxs4r: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4624/rc-xxs4r to capz-conf-bpf2r Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:24 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:25 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Created: Created container rc Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:26 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Started: Started container rc Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:32 +0000 UTC - event for rc-ctrl: {replication-controller } SuccessfulCreate: Created pod: rc-ctrl-f96fq Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:32 +0000 UTC - event for rc-ctrl-f96fq: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4624/rc-ctrl-f96fq to capz-conf-sq8nr Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:34 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:35 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Created: Created container rc-ctrl Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:36 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Started: Started container rc-ctrl Nov 14 02:53:02.678: INFO: At 2022-11-14 02:38:18 +0000 UTC - event for rc: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 14 02:53:02.678: INFO: At 2022-11-14 02:38:18 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-vn9jz Nov 14 02:53:02.678: INFO: At 2022-11-14 02:38:18 +0000 UTC - event for rc-vn9jz: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4624/rc-vn9jz to capz-conf-sq8nr Nov 14 02:53:02.679: INFO: At 2022-11-14 02:38:20 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Created: Created container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:38:20 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:53:02.679: INFO: At 2022-11-14 02:38:22 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Started: Started container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:52:58 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Killing: Stopping container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:52:58 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Killing: Stopping container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:53:00 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Killing: Stopping container rc-ctrl Nov 14 02:53:02.710: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 02:53:02.710: INFO: Nov 14 02:53:02.743: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:53:02.776: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 12771 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 02:49:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:53:02.776: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:53:02.808: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:53:02.864: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container etcd ready: true, restart count 0 Nov 14 02:53:02.864: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 02:53:02.864: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 02:53:02.864: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container coredns ready: true, restart count 0 Nov 14 02:53:02.864: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 02:53:02.864: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 02:53:02.864: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:53:02.864: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 02:53:02.864: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:53:02.864: INFO: Container calico-node ready: true, restart count 0 Nov 14 02:53:02.864: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container coredns ready: true, restart count 0 Nov 14 02:53:02.864: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:02.864: INFO: Container metrics-server ready: true, restart count 0 Nov 14 02:53:03.026: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:53:03.026: INFO: Logging node info for node capz-conf-bpf2r Nov 14 02:53:03.059: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 13034 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-14 02:36:57 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:52:25 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:52:25 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:52:25 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:52:25 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:52:25 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:53:03.060: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 02:53:03.092: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 02:53:03.143: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.143: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:53:03.143: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.143: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:53:03.143: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 02:53:03.143: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:53:03.143: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:53:03.143: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:53:03.143: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.143: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:53:03.296: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 02:53:03.296: INFO: Logging node info for node capz-conf-sq8nr Nov 14 02:53:03.328: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 13024 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-14 02:36:57 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-14 02:52:19 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:52:19 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:52:19 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:52:19 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:52:19 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:53:03.329: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 02:53:03.361: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 02:53:03.411: INFO: 
containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.411: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:53:03.411: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.411: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:53:03.411: INFO: calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC (1+2 container statuses recorded) Nov 14 02:53:03.411: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:53:03.411: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:53:03.411: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:53:03.411: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.411: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:53:03.571: INFO: Latency metrics for node capz-conf-sq8nr [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-4624" for this suite. 11/14/22 02:53:03.571 ------------------------------ • [FAILED] [941.166 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:79 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:80 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/14/22 02:37:22.442 Nov 14 02:37:22.442: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 02:37:22.443 STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:37:22.546 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:37:22.606 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:80 Nov 14 02:37:22.669: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC rc via /v1, Kind=ReplicationController with 1 replicas 11/14/22 02:37:22.67 STEP: creating replication controller rc in namespace horizontal-pod-autoscaling-4624 11/14/22 02:37:22.721 I1114 02:37:22.762023 13 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-4624, replica count: 1 I1114 02:37:32.813989 13 runners.go:193] rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/14/22 02:37:32.814 STEP: creating replication controller rc-ctrl in namespace
horizontal-pod-autoscaling-4624 11/14/22 02:37:32.861 I1114 02:37:32.901933 13 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-4624, replica count: 1 I1114 02:37:42.953430 13 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:37:47.955: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Nov 14 02:37:47.986: INFO: RC rc: consume 250 millicores in total Nov 14 02:37:47.986: INFO: RC rc: setting consumption to 250 millicores in total Nov 14 02:37:47.986: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:37:47.986: INFO: RC rc: consume 0 MB in total Nov 14 02:37:47.986: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:37:47.986: INFO: RC rc: consume custom metric 0 in total Nov 14 02:37:47.987: INFO: RC rc: disabling consumption of custom metric QPS Nov 14 02:37:47.986: INFO: RC rc: disabling mem consumption Nov 14 02:37:48.052: INFO: waiting for 3 replicas (current: 1) Nov 14 02:38:08.086: INFO: waiting for 3 replicas (current: 1) Nov 14 02:38:21.051: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:38:21.051: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:38:28.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:38:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:38:51.097: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:38:51.097: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:39:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:39:21.142: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:39:21.142: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:39:28.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:39:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:39:51.184: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:39:51.184: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:40:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:40:21.225: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:40:21.225: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:40:28.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:40:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14
02:40:51.265: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:40:51.266: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:41:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:41:21.305: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:41:21.305: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:41:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:41:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:41:51.345: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:41:51.345: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:42:08.090: INFO: waiting for 3 replicas (current: 2) Nov 14 02:42:21.387: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:42:21.387: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:42:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:42:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:42:51.442: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:42:51.442: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:43:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:43:21.486: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:43:21.486: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:43:28.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:43:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:43:51.529: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:43:51.529: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:44:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:44:21.568: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:44:21.568: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:44:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:44:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:44:51.607: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:44:51.607: 
INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:45:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:45:21.651: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:45:21.651: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:45:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:45:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:45:51.694: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:45:51.694: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:46:08.089: INFO: waiting for 3 replicas (current: 2) Nov 14 02:46:21.743: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:46:21.743: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:46:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:46:48.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:46:51.788: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:46:51.788: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:47:08.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:47:21.831: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:47:21.831: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:47:28.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:47:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:47:51.875: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:47:51.875: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:48:08.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:48:21.915: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:48:21.916: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:48:28.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:48:48.084: INFO: waiting for 3 replicas (current: 2) Nov 14 02:48:51.955: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:48:51.955: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:49:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:49:21.995: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:49:21.996: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:49:28.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:49:48.089: INFO: waiting for 3 replicas (current: 2) Nov 14 02:49:52.037: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:49:52.037: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:50:08.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:50:22.085: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:50:22.085: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:50:28.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:50:48.088: INFO: waiting for 3 replicas (current: 2) Nov 14 02:50:52.129: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:50:52.129: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:51:08.087: INFO: waiting for 3 replicas (current: 2) Nov 14 02:51:22.173: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:51:22.173: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:51:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:51:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:51:52.214: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:51:52.214: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:52:08.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:22.253: INFO: RC rc: sending request to consume 250 millicores Nov 14 02:52:22.253: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4624/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 02:52:28.086: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:48.085: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:48.116: INFO: waiting for 3 replicas (current: 2) Nov 14 02:52:48.116: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 14 02:52:48.116: FAIL: timeout waiting 
15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003ba1e68, {0x75aabd2?, 0xc00035d980?}, {{0x0, 0x0}, {0x75aac1c, 0x2}, {0x75fb175, 0x15}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75aabd2?, 0x62ae505?}, {{0x0, 0x0}, {0x75aac1c, 0x2}, {0x75fb175, 0x15}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:81 +0x8b
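The FAIL above is the scale-up path giving up after polling for a third ready replica for 15 minutes while the autoscaled ReplicationController stayed at 2. As a rough, hypothetical sketch only (not the e2e framework's own wait helper), the same wait can be reproduced with client-go, reusing the kubeconfig path, namespace, object name, timeout, and roughly the polling cadence that appear in this log:

    // replica_wait_sketch.go - hypothetical standalone approximation of the
    // "waiting for 3 replicas" loop seen in the captured output above.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: same kubeconfig the run reports (">>> kubeConfig: /tmp/kubeconfig").
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Namespace and RC name taken from the log; 3 is the target replica count.
        ns, name, want := "horizontal-pod-autoscaling-4624", "rc", int32(3)

        // Poll the ReplicationController until it reports enough ready replicas;
        // 20s/15m mirror the cadence and timeout visible in the log above.
        err = wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
            rc, err := cs.CoreV1().ReplicationControllers(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("waiting for %d replicas (current: %d)\n", want, rc.Status.ReadyReplicas)
            return rc.Status.ReadyReplicas >= want, nil
        })
        if err != nil {
            fmt.Println("timed out waiting for the condition:", err)
        }
    }

If a loop like this never observes the third replica, the HPA events below ("SuccessfulRescale: New size: 2") and the allocatable CPU reported in the node dumps that follow are the natural places to look next.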
STEP: Removing consuming RC rc 11/14/22 02:52:48.152 Nov 14 02:52:48.152: INFO: RC rc: stopping metric consumer Nov 14 02:52:48.152: INFO: RC rc: stopping CPU consumer Nov 14 02:52:48.152: INFO: RC rc: stopping mem consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-4624, will wait for the garbage collector to delete the pods 11/14/22 02:52:58.152 Nov 14 02:52:58.273: INFO: Deleting ReplicationController rc took: 36.946572ms Nov 14 02:52:58.373: INFO: Terminating ReplicationController rc pods took: 100.334813ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-4624, will wait for the garbage collector to delete the pods 11/14/22 02:53:00.531 Nov 14 02:53:00.649: INFO: Deleting ReplicationController rc-ctrl took: 35.937386ms Nov 14 02:53:00.749: INFO: Terminating ReplicationController rc-ctrl pods took: 100.508266ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 02:53:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/14/22 02:53:02.643 STEP: Collecting events from namespace "horizontal-pod-autoscaling-4624". 11/14/22 02:53:02.643 STEP: Found 19 events. 11/14/22 02:53:02.678 Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:22 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-xxs4r Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:22 +0000 UTC - event for rc-xxs4r: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4624/rc-xxs4r to capz-conf-bpf2r Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:24 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:25 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Created: Created container rc Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:26 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Started: Started container rc Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:32 +0000 UTC - event for rc-ctrl: {replication-controller } SuccessfulCreate: Created pod: rc-ctrl-f96fq Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:32 +0000 UTC - event for rc-ctrl-f96fq: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4624/rc-ctrl-f96fq to capz-conf-sq8nr Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:34 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:35 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Created: Created container rc-ctrl Nov 14 02:53:02.678: INFO: At 2022-11-14 02:37:36 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Started: Started container rc-ctrl Nov 14 02:53:02.678: INFO: At 2022-11-14 02:38:18 +0000 UTC - event for rc: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 14 02:53:02.678: INFO: At 2022-11-14 02:38:18 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-vn9jz Nov 14 02:53:02.678: INFO: At 2022-11-14 02:38:18 +0000 UTC - event for rc-vn9jz: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4624/rc-vn9jz to capz-conf-sq8nr Nov 14 02:53:02.679: INFO: At 2022-11-14 02:38:20 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Created: Created container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:38:20 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 02:53:02.679: INFO: At 2022-11-14 02:38:22 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Started: Started container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:52:58 +0000 UTC - event for rc-vn9jz: {kubelet capz-conf-sq8nr} Killing: Stopping container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:52:58 +0000 UTC - event for rc-xxs4r: {kubelet capz-conf-bpf2r} Killing: Stopping container rc Nov 14 02:53:02.679: INFO: At 2022-11-14 02:53:00 +0000 UTC - event for rc-ctrl-f96fq: {kubelet capz-conf-sq8nr} Killing: Stopping container rc-ctrl Nov 14 02:53:02.710: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 02:53:02.710: INFO: Nov 14 02:53:02.743: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 02:53:02.776: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 12771 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil>
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 02:49:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:49:36 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 02:52:19 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 02:53:03.329: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 02:53:03.361: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 02:53:03.411: INFO: 
containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.411: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 02:53:03.411: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.411: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 02:53:03.411: INFO: calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC (1+2 container statuses recorded) Nov 14 02:53:03.411: INFO: Init container install-cni ready: true, restart count 0 Nov 14 02:53:03.411: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 02:53:03.411: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 02:53:03.411: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded) Nov 14 02:53:03.411: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 02:53:03.571: INFO: Latency metrics for node capz-conf-sq8nr
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4624" for this suite. 11/14/22 02:53:03.571
<< End Captured GinkgoWriter Output
Nov 14 02:52:48.116: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition
In [It] at: test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003ba1e68, {0x75aabd2?, 0xc00035d980?}, {{0x0, 0x0}, {0x75aac1c, 0x2}, {0x75fb175, 0x15}}, 0xc000bece10)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75aabd2?, 0x62ae505?}, {{0x0, 0x0}, {0x75aac1c, 0x2}, {0x75fb175, 0x15}}, {0x75abb3b, 0x3}, ...)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:81 +0x8b
------------------------------
[skipped specs]
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370
[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 02:53:03.617
Nov 14 02:53:03.617: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 11/14/22 02:53:03.619
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:53:03.715
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:53:03.776
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31
[It] should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370
STEP: create the rc 11/14/22 02:53:03.872
STEP: delete the rc 11/14/22 02:53:08.938
STEP: wait for the rc to be deleted 11/14/22 02:53:08.988
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 11/14/22 02:53:14.022
STEP:
Gathering metrics �[38;5;243m11/14/22 02:53:44.067�[0m Nov 14 02:53:44.163: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 02:53:44.195: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 31.929425ms Nov 14 02:53:44.195: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 02:53:44.195: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 02:53:44.557: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Nov 14 02:53:44.557: INFO: Deleting pod "simpletest.rc-27ljn" in namespace "gc-3930" Nov 14 02:53:44.600: INFO: Deleting pod "simpletest.rc-2bbgd" in namespace "gc-3930" Nov 14 02:53:44.643: INFO: Deleting pod "simpletest.rc-2dv77" in namespace "gc-3930" Nov 14 02:53:44.685: INFO: Deleting pod "simpletest.rc-2sj4r" in namespace "gc-3930" Nov 14 02:53:44.725: INFO: Deleting pod "simpletest.rc-2t7j7" in namespace "gc-3930" Nov 14 02:53:44.767: INFO: Deleting pod "simpletest.rc-47ts5" in namespace "gc-3930" Nov 14 02:53:44.816: INFO: Deleting pod "simpletest.rc-4ckpp" in namespace "gc-3930" Nov 14 02:53:44.859: INFO: Deleting pod "simpletest.rc-52c2z" in namespace "gc-3930" Nov 14 02:53:44.900: INFO: Deleting pod "simpletest.rc-55rs9" in namespace "gc-3930" Nov 14 02:53:44.939: INFO: Deleting pod "simpletest.rc-5bkp6" in namespace "gc-3930" Nov 14 02:53:44.979: INFO: Deleting pod "simpletest.rc-5qqsj" in namespace "gc-3930" Nov 14 02:53:45.020: INFO: Deleting pod "simpletest.rc-5qwlv" in namespace "gc-3930" Nov 14 02:53:45.071: INFO: Deleting pod "simpletest.rc-5r8lx" in namespace "gc-3930" Nov 14 02:53:45.112: INFO: Deleting pod "simpletest.rc-6fnkh" in namespace "gc-3930" Nov 14 02:53:45.158: INFO: Deleting pod "simpletest.rc-6tkg4" in namespace "gc-3930" Nov 14 02:53:45.204: INFO: Deleting pod "simpletest.rc-78m7b" in namespace "gc-3930" Nov 14 02:53:45.246: INFO: Deleting pod "simpletest.rc-79swd" in namespace "gc-3930" Nov 14 02:53:45.308: INFO: Deleting pod "simpletest.rc-7t4sg" in namespace "gc-3930" Nov 14 02:53:45.351: INFO: Deleting pod "simpletest.rc-7zk9q" in namespace "gc-3930" Nov 14 02:53:45.398: INFO: Deleting pod "simpletest.rc-855qw" in namespace "gc-3930" Nov 14 02:53:45.439: INFO: Deleting pod "simpletest.rc-8lxw9" in namespace "gc-3930" Nov 14 02:53:45.484: INFO: Deleting pod "simpletest.rc-8pz5j" in namespace "gc-3930" Nov 14 02:53:45.528: INFO: Deleting pod "simpletest.rc-8w825" in namespace "gc-3930" Nov 14 02:53:45.575: 
INFO: Deleting pod "simpletest.rc-9xk8s" in namespace "gc-3930" Nov 14 02:53:45.621: INFO: Deleting pod "simpletest.rc-b5k6c" in namespace "gc-3930" Nov 14 02:53:45.666: INFO: Deleting pod "simpletest.rc-b7b8w" in namespace "gc-3930" Nov 14 02:53:45.706: INFO: Deleting pod "simpletest.rc-bc9h8" in namespace "gc-3930" Nov 14 02:53:45.752: INFO: Deleting pod "simpletest.rc-bdqn7" in namespace "gc-3930" Nov 14 02:53:45.795: INFO: Deleting pod "simpletest.rc-bhm7r" in namespace "gc-3930" Nov 14 02:53:45.836: INFO: Deleting pod "simpletest.rc-bx2kw" in namespace "gc-3930" Nov 14 02:53:45.880: INFO: Deleting pod "simpletest.rc-c6fxm" in namespace "gc-3930" Nov 14 02:53:45.928: INFO: Deleting pod "simpletest.rc-cgcw8" in namespace "gc-3930" Nov 14 02:53:45.973: INFO: Deleting pod "simpletest.rc-cjbtv" in namespace "gc-3930" Nov 14 02:53:46.020: INFO: Deleting pod "simpletest.rc-cxxkt" in namespace "gc-3930" Nov 14 02:53:46.061: INFO: Deleting pod "simpletest.rc-d6wps" in namespace "gc-3930" Nov 14 02:53:46.101: INFO: Deleting pod "simpletest.rc-dq9zt" in namespace "gc-3930" Nov 14 02:53:46.146: INFO: Deleting pod "simpletest.rc-dslg9" in namespace "gc-3930" Nov 14 02:53:46.193: INFO: Deleting pod "simpletest.rc-fgbpz" in namespace "gc-3930" Nov 14 02:53:46.239: INFO: Deleting pod "simpletest.rc-ftf9s" in namespace "gc-3930" Nov 14 02:53:46.285: INFO: Deleting pod "simpletest.rc-fwrcg" in namespace "gc-3930" Nov 14 02:53:46.339: INFO: Deleting pod "simpletest.rc-g6487" in namespace "gc-3930" Nov 14 02:53:46.381: INFO: Deleting pod "simpletest.rc-gtt7f" in namespace "gc-3930" Nov 14 02:53:46.425: INFO: Deleting pod "simpletest.rc-gvm92" in namespace "gc-3930" Nov 14 02:53:46.472: INFO: Deleting pod "simpletest.rc-gw2lj" in namespace "gc-3930" Nov 14 02:53:46.516: INFO: Deleting pod "simpletest.rc-h5drj" in namespace "gc-3930" Nov 14 02:53:46.557: INFO: Deleting pod "simpletest.rc-h77v2" in namespace "gc-3930" Nov 14 02:53:46.602: INFO: Deleting pod "simpletest.rc-h8vjb" in namespace "gc-3930" Nov 14 02:53:46.650: INFO: Deleting pod "simpletest.rc-hhpk6" in namespace "gc-3930" Nov 14 02:53:46.695: INFO: Deleting pod "simpletest.rc-hmkp4" in namespace "gc-3930" Nov 14 02:53:46.736: INFO: Deleting pod "simpletest.rc-j8q99" in namespace "gc-3930" Nov 14 02:53:46.792: INFO: Deleting pod "simpletest.rc-jlsgw" in namespace "gc-3930" Nov 14 02:53:46.839: INFO: Deleting pod "simpletest.rc-jn2m9" in namespace "gc-3930" Nov 14 02:53:46.891: INFO: Deleting pod "simpletest.rc-jsvhz" in namespace "gc-3930" Nov 14 02:53:46.931: INFO: Deleting pod "simpletest.rc-k6svj" in namespace "gc-3930" Nov 14 02:53:46.971: INFO: Deleting pod "simpletest.rc-k9ljs" in namespace "gc-3930" Nov 14 02:53:47.017: INFO: Deleting pod "simpletest.rc-lzl4b" in namespace "gc-3930" Nov 14 02:53:47.060: INFO: Deleting pod "simpletest.rc-mc6zl" in namespace "gc-3930" Nov 14 02:53:47.108: INFO: Deleting pod "simpletest.rc-mfnkm" in namespace "gc-3930" Nov 14 02:53:47.155: INFO: Deleting pod "simpletest.rc-mj4rg" in namespace "gc-3930" Nov 14 02:53:47.196: INFO: Deleting pod "simpletest.rc-mn4bt" in namespace "gc-3930" Nov 14 02:53:47.239: INFO: Deleting pod "simpletest.rc-mqznp" in namespace "gc-3930" Nov 14 02:53:47.280: INFO: Deleting pod "simpletest.rc-mtjwz" in namespace "gc-3930" Nov 14 02:53:47.326: INFO: Deleting pod "simpletest.rc-mxr89" in namespace "gc-3930" Nov 14 02:53:47.375: INFO: Deleting pod "simpletest.rc-n82mh" in namespace "gc-3930" Nov 14 02:53:47.424: INFO: Deleting pod "simpletest.rc-n8qnx" in namespace "gc-3930" Nov 
14 02:53:47.473: INFO: Deleting pod "simpletest.rc-nbp5n" in namespace "gc-3930" Nov 14 02:53:47.520: INFO: Deleting pod "simpletest.rc-njfjx" in namespace "gc-3930" Nov 14 02:53:47.567: INFO: Deleting pod "simpletest.rc-nsq2g" in namespace "gc-3930" Nov 14 02:53:47.611: INFO: Deleting pod "simpletest.rc-pbn92" in namespace "gc-3930" Nov 14 02:53:47.656: INFO: Deleting pod "simpletest.rc-pzwhv" in namespace "gc-3930" Nov 14 02:53:47.701: INFO: Deleting pod "simpletest.rc-q5h49" in namespace "gc-3930" Nov 14 02:53:47.754: INFO: Deleting pod "simpletest.rc-qkshp" in namespace "gc-3930" Nov 14 02:53:47.799: INFO: Deleting pod "simpletest.rc-r2bwf" in namespace "gc-3930" Nov 14 02:53:47.839: INFO: Deleting pod "simpletest.rc-rfsbm" in namespace "gc-3930" Nov 14 02:53:47.884: INFO: Deleting pod "simpletest.rc-rz8rm" in namespace "gc-3930" Nov 14 02:53:47.926: INFO: Deleting pod "simpletest.rc-s7xmg" in namespace "gc-3930" Nov 14 02:53:47.972: INFO: Deleting pod "simpletest.rc-sh8rn" in namespace "gc-3930" Nov 14 02:53:48.012: INFO: Deleting pod "simpletest.rc-spz4w" in namespace "gc-3930" Nov 14 02:53:48.065: INFO: Deleting pod "simpletest.rc-tj6d9" in namespace "gc-3930" Nov 14 02:53:48.105: INFO: Deleting pod "simpletest.rc-tqjs9" in namespace "gc-3930" Nov 14 02:53:48.151: INFO: Deleting pod "simpletest.rc-ttmcx" in namespace "gc-3930" Nov 14 02:53:48.199: INFO: Deleting pod "simpletest.rc-v6qpq" in namespace "gc-3930" Nov 14 02:53:48.241: INFO: Deleting pod "simpletest.rc-v7kzq" in namespace "gc-3930" Nov 14 02:53:48.281: INFO: Deleting pod "simpletest.rc-vdzc4" in namespace "gc-3930" Nov 14 02:53:48.322: INFO: Deleting pod "simpletest.rc-vhfdl" in namespace "gc-3930" Nov 14 02:53:48.365: INFO: Deleting pod "simpletest.rc-vkjjr" in namespace "gc-3930" Nov 14 02:53:48.409: INFO: Deleting pod "simpletest.rc-vmmbs" in namespace "gc-3930" Nov 14 02:53:48.454: INFO: Deleting pod "simpletest.rc-vqpsq" in namespace "gc-3930" Nov 14 02:53:48.496: INFO: Deleting pod "simpletest.rc-vxq9j" in namespace "gc-3930" Nov 14 02:53:48.557: INFO: Deleting pod "simpletest.rc-w2q4n" in namespace "gc-3930" Nov 14 02:53:48.608: INFO: Deleting pod "simpletest.rc-w4td6" in namespace "gc-3930" Nov 14 02:53:48.655: INFO: Deleting pod "simpletest.rc-wh2x2" in namespace "gc-3930" Nov 14 02:53:48.707: INFO: Deleting pod "simpletest.rc-wkswb" in namespace "gc-3930" Nov 14 02:53:48.751: INFO: Deleting pod "simpletest.rc-wz656" in namespace "gc-3930" Nov 14 02:53:48.799: INFO: Deleting pod "simpletest.rc-xb8kh" in namespace "gc-3930" Nov 14 02:53:48.842: INFO: Deleting pod "simpletest.rc-xglct" in namespace "gc-3930" Nov 14 02:53:48.884: INFO: Deleting pod "simpletest.rc-xkbbf" in namespace "gc-3930" Nov 14 02:53:48.926: INFO: Deleting pod "simpletest.rc-z7dkh" in namespace "gc-3930" Nov 14 02:53:48.974: INFO: Deleting pod "simpletest.rc-z8snz" in namespace "gc-3930" Nov 14 02:53:49.021: INFO: Deleting pod "simpletest.rc-z9wkj" in namespace "gc-3930" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 02:53:49.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-3930" for this suite. 
11/14/22 02:53:49.102
------------------------------
• [45.520 seconds]
[sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370
Begin Captured GinkgoWriter Output >> [duplicate of the streamed spec output above] << End Captured GinkgoWriter Output
------------------------------
[skipped specs]
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:616
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 02:53:49.155
Nov 14 02:53:49.155: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 11/14/22 02:53:49.157
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:53:49.262
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:53:49.322
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:96
Nov 14 02:53:49.486: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 14 02:54:49.771: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:54:49.802�[0m Nov 14 02:54:49.802: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption-path �[38;5;243m11/14/22 02:54:49.804�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:54:49.903�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:54:49.963�[0m [BeforeEach] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:31 [BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:569 �[1mSTEP:�[0m Finding an available node �[38;5;243m11/14/22 02:54:50.025�[0m �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m11/14/22 02:54:50.025�[0m Nov 14 02:54:50.064: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-5138" to be "running" Nov 14 02:54:50.095: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 30.865623ms Nov 14 02:54:52.127: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063106012s Nov 14 02:54:54.135: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070638326s Nov 14 02:54:56.127: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062743689s Nov 14 02:54:58.135: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070883202s Nov 14 02:55:00.128: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 10.063784755s Nov 14 02:55:00.128: INFO: Pod "without-label" satisfied condition "running" �[1mSTEP:�[0m Explicitly delete pod here to free the resource it takes. �[38;5;243m11/14/22 02:55:00.16�[0m Nov 14 02:55:00.210: INFO: found a healthy node: capz-conf-bpf2r [It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:616 Nov 14 02:55:22.748: INFO: pods created so far: [1 1 1] Nov 14 02:55:22.748: INFO: length of pods created so far: 3 Nov 14 02:55:26.820: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath test/e2e/framework/node/init/init.go:32 Nov 14 02:55:33.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:543 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 02:55:34.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 [DeferCleanup (Each)] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] PreemptionExecutionPath dump namespaces | framework.go:196 [DeferCleanup (Each)] PreemptionExecutionPath tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-path-5138" for this suite. �[38;5;243m11/14/22 02:55:34.263�[0m [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-1446" for this suite. 
11/14/22 02:55:34.305
------------------------------
• [105.187 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
PreemptionExecutionPath test/e2e/scheduling/preemption.go:531
runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:616
Begin Captured GinkgoWriter Output >> [duplicate of the streamed spec output above] << End Captured GinkgoWriter Output
------------------------------
[skipped specs]
------------------------------
[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 02:55:34.344
Nov 14 02:55:34.345: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 11/14/22 02:55:34.346
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 02:55:34.444
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 02:55:34.505
[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225
STEP: creating the pod with failed condition 11/14/22 02:55:34.567
Nov 14 02:55:34.606: INFO: Waiting up to 2m0s for pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" in namespace "var-expansion-5671" to be "running"
Nov 14 02:55:34.638: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.802288ms
Nov 14 02:55:36.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.065589952s Nov 14 02:55:38.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065555795s Nov 14 02:55:40.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064474455s Nov 14 02:55:42.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065713425s Nov 14 02:55:44.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06561456s Nov 14 02:55:46.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065715414s Nov 14 02:55:48.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.065563964s Nov 14 02:55:50.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064789919s Nov 14 02:55:52.673: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.067196876s Nov 14 02:55:54.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.065793174s Nov 14 02:55:56.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066099045s Nov 14 02:55:58.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.065064767s Nov 14 02:56:00.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.064762931s Nov 14 02:56:02.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.065323392s Nov 14 02:56:04.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.065835324s Nov 14 02:56:06.673: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.067227376s Nov 14 02:56:08.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.064830557s Nov 14 02:56:10.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064656718s Nov 14 02:56:12.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.065189552s Nov 14 02:56:14.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.065757677s Nov 14 02:56:16.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.065457019s Nov 14 02:56:18.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.064578536s Nov 14 02:56:20.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.066444482s Nov 14 02:56:22.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.065123577s Nov 14 02:56:24.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.066699969s Nov 14 02:56:26.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.066173133s Nov 14 02:56:28.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.066153213s Nov 14 02:56:30.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.065032224s Nov 14 02:56:32.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.066597372s Nov 14 02:56:34.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.066775243s Nov 14 02:56:36.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.065456052s Nov 14 02:56:38.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.065715761s Nov 14 02:56:40.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.066640559s Nov 14 02:56:42.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.066672372s Nov 14 02:56:44.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.065288925s Nov 14 02:56:46.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.065592464s Nov 14 02:56:48.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.06665965s Nov 14 02:56:50.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.06550784s Nov 14 02:56:52.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.065474775s Nov 14 02:56:54.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.065687917s Nov 14 02:56:56.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.064996808s Nov 14 02:56:58.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.06687071s Nov 14 02:57:00.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.064765122s Nov 14 02:57:02.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.065179404s Nov 14 02:57:04.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.064695015s Nov 14 02:57:06.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.065849584s Nov 14 02:57:08.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m34.065611644s Nov 14 02:57:10.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.065453563s Nov 14 02:57:12.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.065355438s Nov 14 02:57:14.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.065440574s Nov 14 02:57:16.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.065250803s Nov 14 02:57:18.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.065722935s Nov 14 02:57:20.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.066884214s Nov 14 02:57:22.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.064853011s Nov 14 02:57:24.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.065887532s Nov 14 02:57:26.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.065882196s Nov 14 02:57:28.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.0666495s Nov 14 02:57:30.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.066535715s Nov 14 02:57:32.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.066783732s Nov 14 02:57:34.673: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.066928699s Nov 14 02:57:34.704: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.098154979s �[1mSTEP:�[0m updating the pod �[38;5;243m11/14/22 02:57:34.704�[0m Nov 14 02:57:35.277: INFO: Successfully updated pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" �[1mSTEP:�[0m waiting for pod running �[38;5;243m11/14/22 02:57:35.277�[0m Nov 14 02:57:35.277: INFO: Waiting up to 2m0s for pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" in namespace "var-expansion-5671" to be "running" Nov 14 02:57:35.308: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.24872ms Nov 14 02:57:37.340: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063210015s Nov 14 02:57:39.342: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064789276s Nov 14 02:57:41.341: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063824669s Nov 14 02:57:43.341: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064071639s Nov 14 02:57:45.340: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.063018559s Nov 14 02:57:47.342: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Running", Reason="", readiness=true. Elapsed: 12.064491571s Nov 14 02:57:47.342: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" satisfied condition "running" �[1mSTEP:�[0m deleting the pod gracefully �[38;5;243m11/14/22 02:57:47.342�[0m Nov 14 02:57:47.342: INFO: Deleting pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" in namespace "var-expansion-5671" Nov 14 02:57:47.380: INFO: Wait up to 5m0s for pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 14 02:57:51.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "var-expansion-5671" for this suite. �[38;5;243m11/14/22 02:57:51.478�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [137.170 seconds]�[0m [sig-node] Variable Expansion �[38;5;243mtest/e2e/common/node/framework.go:23�[0m should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] �[38;5;243mtest/e2e/common/node/expansion.go:225�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:55:34.344�[0m Nov 14 02:55:34.345: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename var-expansion �[38;5;243m11/14/22 02:55:34.346�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:55:34.444�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:55:34.505�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 �[1mSTEP:�[0m creating the pod with failed condition �[38;5;243m11/14/22 02:55:34.567�[0m Nov 14 02:55:34.606: INFO: Waiting up to 2m0s for pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" in namespace "var-expansion-5671" to be "running" Nov 14 02:55:34.638: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.802288ms Nov 14 02:55:36.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065589952s Nov 14 02:55:38.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065555795s Nov 14 02:55:40.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064474455s Nov 14 02:55:42.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065713425s Nov 14 02:55:44.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.06561456s Nov 14 02:55:46.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065715414s Nov 14 02:55:48.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.065563964s Nov 14 02:55:50.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064789919s Nov 14 02:55:52.673: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.067196876s Nov 14 02:55:54.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.065793174s Nov 14 02:55:56.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066099045s Nov 14 02:55:58.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.065064767s Nov 14 02:56:00.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.064762931s Nov 14 02:56:02.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.065323392s Nov 14 02:56:04.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.065835324s Nov 14 02:56:06.673: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.067227376s Nov 14 02:56:08.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.064830557s Nov 14 02:56:10.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064656718s Nov 14 02:56:12.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.065189552s Nov 14 02:56:14.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.065757677s Nov 14 02:56:16.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.065457019s Nov 14 02:56:18.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.064578536s Nov 14 02:56:20.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.066444482s Nov 14 02:56:22.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.065123577s Nov 14 02:56:24.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.066699969s Nov 14 02:56:26.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.066173133s Nov 14 02:56:28.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.066153213s Nov 14 02:56:30.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.065032224s Nov 14 02:56:32.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.066597372s Nov 14 02:56:34.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.066775243s Nov 14 02:56:36.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.065456052s Nov 14 02:56:38.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.065715761s Nov 14 02:56:40.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.066640559s Nov 14 02:56:42.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.066672372s Nov 14 02:56:44.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.065288925s Nov 14 02:56:46.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.065592464s Nov 14 02:56:48.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.06665965s Nov 14 02:56:50.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.06550784s Nov 14 02:56:52.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.065474775s Nov 14 02:56:54.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.065687917s Nov 14 02:56:56.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.064996808s Nov 14 02:56:58.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.06687071s Nov 14 02:57:00.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.064765122s Nov 14 02:57:02.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.065179404s Nov 14 02:57:04.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.064695015s Nov 14 02:57:06.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.065849584s Nov 14 02:57:08.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.065611644s Nov 14 02:57:10.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.065453563s Nov 14 02:57:12.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.065355438s Nov 14 02:57:14.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.065440574s Nov 14 02:57:16.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m42.065250803s Nov 14 02:57:18.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.065722935s Nov 14 02:57:20.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.066884214s Nov 14 02:57:22.670: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.064853011s Nov 14 02:57:24.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.065887532s Nov 14 02:57:26.671: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.065882196s Nov 14 02:57:28.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.0666495s Nov 14 02:57:30.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.066535715s Nov 14 02:57:32.672: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.066783732s Nov 14 02:57:34.673: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.066928699s Nov 14 02:57:34.704: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.098154979s �[1mSTEP:�[0m updating the pod �[38;5;243m11/14/22 02:57:34.704�[0m Nov 14 02:57:35.277: INFO: Successfully updated pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" �[1mSTEP:�[0m waiting for pod running �[38;5;243m11/14/22 02:57:35.277�[0m Nov 14 02:57:35.277: INFO: Waiting up to 2m0s for pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" in namespace "var-expansion-5671" to be "running" Nov 14 02:57:35.308: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.24872ms Nov 14 02:57:37.340: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063210015s Nov 14 02:57:39.342: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064789276s Nov 14 02:57:41.341: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063824669s Nov 14 02:57:43.341: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064071639s Nov 14 02:57:45.340: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063018559s Nov 14 02:57:47.342: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.064491571s Nov 14 02:57:47.342: INFO: Pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" satisfied condition "running" �[1mSTEP:�[0m deleting the pod gracefully �[38;5;243m11/14/22 02:57:47.342�[0m Nov 14 02:57:47.342: INFO: Deleting pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" in namespace "var-expansion-5671" Nov 14 02:57:47.380: INFO: Wait up to 5m0s for pod "var-expansion-a1b9ae52-4fdd-4263-8996-b52c208aec3a" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 14 02:57:51.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "var-expansion-5671" for this suite. �[38;5;243m11/14/22 02:57:51.478�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[1mvalidates pod disruption condition is added to the preempted pod�[0m �[38;5;243mtest/e2e/scheduling/preemption.go:324�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:57:51.521�[0m Nov 14 02:57:51.521: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m11/14/22 02:57:51.522�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:57:51.623�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:57:51.684�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] 
  test/e2e/scheduling/preemption.go:96
Nov 14 02:57:51.855: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 14 02:58:52.142: INFO: Waiting for terminating namespaces to be deleted...
[It] validates pod disruption condition is added to the preempted pod
  test/e2e/scheduling/preemption.go:324
STEP: Select a node to run the lower and higher priority pods 11/14/22 02:58:52.174
STEP: Create a low priority pod that consumes 1/1 of node resources 11/14/22 02:58:52.222
Nov 14 02:58:52.263: INFO: Created pod: victim-pod
STEP: Wait for the victim pod to be scheduled 11/14/22 02:58:52.263
Nov 14 02:58:52.263: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-8889" to be "running"
Nov 14 02:58:52.294: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.003535ms
Nov 14 02:58:54.326: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063088625s
Nov 14 02:58:56.327: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063465013s
Nov 14 02:58:58.331: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.067212485s
Nov 14 02:58:58.331: INFO: Pod "victim-pod" satisfied condition "running"
STEP: Create a high priority pod to trigger preemption of the lower priority pod 11/14/22 02:58:58.331
Nov 14 02:58:58.365: INFO: Created pod: preemptor-pod
STEP: Waiting for the victim pod to be terminating 11/14/22 02:58:58.366
Nov 14 02:58:58.366: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-8889" to be "is terminating"
Nov 14 02:58:58.402: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 36.1638ms
Nov 14 02:58:58.402: INFO: Pod "victim-pod" satisfied condition "is terminating"
STEP: Verifying the pod has the pod disruption condition 11/14/22 02:58:58.402
Nov 14 02:58:58.438: INFO: Removing pod's "victim-pod" finalizer: "example.com/test-finalizer"
Nov 14 02:58:59.011: INFO: Successfully updated pod "victim-pod"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/node/init/init.go:32
Nov 14 02:58:59.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-8889" for this suite.
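Editor's note: the spec above preempts the low-priority victim-pod with preemptor-pod and then verifies that a disruption condition was added to the victim before it is deleted; the "Removing pod's finalizer" step appears to exist only so the terminating pod stays visible long enough to be inspected. A minimal client-go sketch of that final check is below. The kubeconfig path, namespace, and pod name are taken from the log; the rest of the wiring is an illustrative assumption, and the condition is matched against the literal string "DisruptionTarget" (the Pod condition behind the PodDisruptionConditions feature) rather than through the e2e framework's own helpers.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hasDisruptionTarget reports whether the pod carries a true "DisruptionTarget"
// condition, which the scheduler adds to pods it preempts when the
// PodDisruptionConditions feature is enabled.
func hasDisruptionTarget(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodConditionType("DisruptionTarget") && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumed wiring: same kubeconfig, namespace, and pod name as in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("sched-preemption-8889").Get(context.TODO(), "victim-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("victim-pod has DisruptionTarget condition: %v\n", hasDisruptionTarget(pod))
}
```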
�[38;5;243m11/14/22 02:58:59.238�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [67.754 seconds]�[0m [sig-scheduling] SchedulerPreemption [Serial] �[38;5;243mtest/e2e/scheduling/framework.go:40�[0m validates pod disruption condition is added to the preempted pod �[38;5;243mtest/e2e/scheduling/preemption.go:324�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:57:51.521�[0m Nov 14 02:57:51.521: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m11/14/22 02:57:51.522�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:57:51.623�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:57:51.684�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:96 Nov 14 02:57:51.855: INFO: Waiting up to 1m0s for all nodes to be ready Nov 14 02:58:52.142: INFO: Waiting for terminating namespaces to be deleted... [It] validates pod disruption condition is added to the preempted pod test/e2e/scheduling/preemption.go:324 �[1mSTEP:�[0m Select a node to run the lower and higher priority pods �[38;5;243m11/14/22 02:58:52.174�[0m �[1mSTEP:�[0m Create a low priority pod that consumes 1/1 of node resources �[38;5;243m11/14/22 02:58:52.222�[0m Nov 14 02:58:52.263: INFO: Created pod: victim-pod �[1mSTEP:�[0m Wait for the victim pod to be scheduled �[38;5;243m11/14/22 02:58:52.263�[0m Nov 14 02:58:52.263: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-8889" to be "running" Nov 14 02:58:52.294: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.003535ms Nov 14 02:58:54.326: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063088625s Nov 14 02:58:56.327: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063465013s Nov 14 02:58:58.331: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.067212485s Nov 14 02:58:58.331: INFO: Pod "victim-pod" satisfied condition "running" �[1mSTEP:�[0m Create a high priority pod to trigger preemption of the lower priority pod �[38;5;243m11/14/22 02:58:58.331�[0m Nov 14 02:58:58.365: INFO: Created pod: preemptor-pod �[1mSTEP:�[0m Waiting for the victim pod to be terminating �[38;5;243m11/14/22 02:58:58.366�[0m Nov 14 02:58:58.366: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-8889" to be "is terminating" Nov 14 02:58:58.402: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 36.1638ms Nov 14 02:58:58.402: INFO: Pod "victim-pod" satisfied condition "is terminating" �[1mSTEP:�[0m Verifying the pod has the pod disruption condition �[38;5;243m11/14/22 02:58:58.402�[0m Nov 14 02:58:58.438: INFO: Removing pod's "victim-pod" finalizer: "example.com/test-finalizer" Nov 14 02:58:59.011: INFO: Successfully updated pod "victim-pod" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 02:58:59.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-8889" for this suite. �[38;5;243m11/14/22 02:58:59.238�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] GMSA Kubelet [Slow] �[38;5;243mkubelet GMSA support �[0mwhen creating a pod with correct GMSA credential specs�[0m �[1mpasses the credential specs down to the Pod's containers�[0m �[38;5;243mtest/e2e/windows/gmsa_kubelet.go:47�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:58:59.276�[0m Nov 14 02:58:59.277: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gmsa-kubelet-test-windows �[38;5;243m11/14/22 02:58:59.278�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:58:59.376�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:58:59.437�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/framework/metrics/init/init.go:31 [It] passes the credential specs down to the Pod's containers test/e2e/windows/gmsa_kubelet.go:47 �[1mSTEP:�[0m creating a pod with correct GMSA specs �[38;5;243m11/14/22 02:58:59.499�[0m Nov 14 02:58:59.540: INFO: Waiting up to 5m0s for pod "with-correct-gmsa-specs" in namespace "gmsa-kubelet-test-windows-2281" to be "running and ready" Nov 14 02:58:59.571: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.239351ms Nov 14 02:58:59.571: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:59:01.604: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063866172s Nov 14 02:59:01.604: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:59:03.605: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.065201567s
Nov 14 02:59:03.605: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 14 02:59:05.604: INFO: Pod "with-correct-gmsa-specs": Phase="Running", Reason="", readiness=true. Elapsed: 6.063998621s
Nov 14 02:59:05.604: INFO: The phase of Pod with-correct-gmsa-specs is Running (Ready = true)
Nov 14 02:59:05.604: INFO: Pod "with-correct-gmsa-specs" satisfied condition "running and ready"
STEP: checking the domain reported by nltest in the containers 11/14/22 02:59:05.636
Nov 14 02:59:05.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-2281 exec --namespace=gmsa-kubelet-test-windows-2281 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN'
Nov 14 02:59:06.377: INFO: stderr: ""
Nov 14 02:59:06.377: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n"
Nov 14 02:59:06.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-2281 exec --namespace=gmsa-kubelet-test-windows-2281 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN'
Nov 14 02:59:06.888: INFO: stderr: ""
Nov 14 02:59:06.888: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 14 02:59:06.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  tear down framework | framework.go:193
STEP: Destroying namespace "gmsa-kubelet-test-windows-2281" for this suite.
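Editor's note: the GMSA kubelet spec above starts a two-container Windows pod, hands each container a different GMSA credential spec, and then asserts that `nltest /PARENTDOMAIN` inside container1 and container2 reports acme.com and contoso.org respectively. A sketch of how a pod wires credential specs down to its containers is below; the inline credential-spec JSON, image, and command are placeholders, not what the e2e framework actually generates.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

// gmsaPod builds a two-container Windows pod where each container is handed a
// different GMSA credential spec via securityContext.windowsOptions. In a full
// cluster the GMSA webhook usually resolves GMSACredentialSpecName from a
// GMSACredentialSpec custom resource instead of an inline JSON blob.
func gmsaPod() *corev1.Pod {
	credSpec1 := `{"CmsPlugins":["ActiveDirectory"],"DomainJoinConfig":{"DnsName":"acme.com"}}`     // placeholder spec
	credSpec2 := `{"CmsPlugins":["ActiveDirectory"],"DomainJoinConfig":{"DnsName":"contoso.org"}}` // placeholder spec
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-correct-gmsa-specs"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Containers: []corev1.Container{
				{
					Name:    "container1",
					Image:   "mcr.microsoft.com/windows/servercore:ltsc2019", // placeholder image
					Command: []string{"powershell", "-Command", "Start-Sleep 600"},
					SecurityContext: &corev1.SecurityContext{
						WindowsOptions: &corev1.WindowsSecurityContextOptions{
							GMSACredentialSpec: strPtr(credSpec1),
						},
					},
				},
				{
					Name:    "container2",
					Image:   "mcr.microsoft.com/windows/servercore:ltsc2019", // placeholder image
					Command: []string{"powershell", "-Command", "Start-Sleep 600"},
					SecurityContext: &corev1.SecurityContext{
						WindowsOptions: &corev1.WindowsSecurityContextOptions{
							GMSACredentialSpec: strPtr(credSpec2),
						},
					},
				},
			},
		},
	}
}

func main() {
	fmt.Println(gmsaPod().Name)
}
```

The verification in the log is then just a `kubectl exec <pod> --container=<name> -- nltest /PARENTDOMAIN` per container, with the expected domain read out of stdout.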
�[38;5;243m11/14/22 02:59:06.922�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [7.680 seconds]�[0m [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] �[38;5;243mtest/e2e/windows/framework.go:27�[0m kubelet GMSA support �[38;5;243mtest/e2e/windows/gmsa_kubelet.go:45�[0m when creating a pod with correct GMSA credential specs �[38;5;243mtest/e2e/windows/gmsa_kubelet.go:46�[0m passes the credential specs down to the Pod's containers �[38;5;243mtest/e2e/windows/gmsa_kubelet.go:47�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:58:59.276�[0m Nov 14 02:58:59.277: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gmsa-kubelet-test-windows �[38;5;243m11/14/22 02:58:59.278�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:58:59.376�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:58:59.437�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/framework/metrics/init/init.go:31 [It] passes the credential specs down to the Pod's containers test/e2e/windows/gmsa_kubelet.go:47 �[1mSTEP:�[0m creating a pod with correct GMSA specs �[38;5;243m11/14/22 02:58:59.499�[0m Nov 14 02:58:59.540: INFO: Waiting up to 5m0s for pod "with-correct-gmsa-specs" in namespace "gmsa-kubelet-test-windows-2281" to be "running and ready" Nov 14 02:58:59.571: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.239351ms Nov 14 02:58:59.571: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:59:01.604: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063866172s Nov 14 02:59:01.604: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:59:03.605: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065201567s Nov 14 02:59:03.605: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Nov 14 02:59:05.604: INFO: Pod "with-correct-gmsa-specs": Phase="Running", Reason="", readiness=true. Elapsed: 6.063998621s Nov 14 02:59:05.604: INFO: The phase of Pod with-correct-gmsa-specs is Running (Ready = true) Nov 14 02:59:05.604: INFO: Pod "with-correct-gmsa-specs" satisfied condition "running and ready" �[1mSTEP:�[0m checking the domain reported by nltest in the containers �[38;5;243m11/14/22 02:59:05.636�[0m Nov 14 02:59:05.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-2281 exec --namespace=gmsa-kubelet-test-windows-2281 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN' Nov 14 02:59:06.377: INFO: stderr: "" Nov 14 02:59:06.377: INFO: stdout: "acme.com. 
(1)\r\nThe command completed successfully\r\n" Nov 14 02:59:06.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-2281 exec --namespace=gmsa-kubelet-test-windows-2281 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN' Nov 14 02:59:06.888: INFO: stderr: "" Nov 14 02:59:06.888: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n" [AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/framework/node/init/init.go:32 Nov 14 02:59:06.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gmsa-kubelet-test-windows-2281" for this suite. �[38;5;243m11/14/22 02:59:06.922�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mwith scale limited by number of Pods rate�[0m �[1mshould scale up no more than given number of Pods per minute�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:59:06.961�[0m Nov 14 02:59:06.961: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 02:59:06.963�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:59:07.06�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:59:07.122�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] should scale up no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m11/14/22 02:59:07.184�[0m Nov 14 02:59:07.184: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m11/14/22 02:59:07.186�[0m �[1mSTEP:�[0m Creating deployment 
consumer in namespace horizontal-pod-autoscaling-7714 �[38;5;243m11/14/22 02:59:07.236�[0m I1114 02:59:07.271874 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-7714, replica count: 1 I1114 02:59:17.322915 13 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 02:59:17.323�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-7714 �[38;5;243m11/14/22 02:59:17.38�[0m I1114 02:59:17.417761 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-7714, replica count: 1 I1114 02:59:27.468570 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:59:32.468: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 14 02:59:32.500: INFO: RC consumer: consume 45 millicores in total Nov 14 02:59:32.500: INFO: RC consumer: consume 0 MB in total Nov 14 02:59:32.500: INFO: RC consumer: disabling mem consumption Nov 14 02:59:32.500: INFO: RC consumer: setting consumption to 45 millicores in total Nov 14 02:59:32.500: INFO: RC consumer: consume custom metric 0 in total Nov 14 02:59:32.501: INFO: RC consumer: disabling consumption of custom metric QPS �[1mSTEP:�[0m triggering scale up by increasing consumption �[38;5;243m11/14/22 02:59:32.537�[0m Nov 14 02:59:32.537: INFO: RC consumer: consume 135 millicores in total Nov 14 02:59:32.538: INFO: RC consumer: setting consumption to 135 millicores in total Nov 14 02:59:32.568: INFO: waiting for 2 replicas (current: 1) Nov 14 02:59:52.601: INFO: waiting for 2 replicas (current: 1) Nov 14 03:00:02.501: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:00:02.501: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:00:12.602: INFO: waiting for 2 replicas (current: 1) Nov 14 03:00:32.563: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:00:32.563: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:00:32.600: INFO: waiting for 2 replicas (current: 2) Nov 14 03:00:32.632: INFO: waiting for 3 replicas (current: 2) Nov 14 03:00:52.666: INFO: waiting for 3 replicas (current: 2) Nov 14 03:01:02.604: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:01:02.604: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:01:12.665: INFO: waiting for 3 replicas (current: 2) Nov 14 03:01:32.646: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:01:32.647: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=135&requestSizeMillicores=100 }
Nov 14 03:01:32.666: INFO: waiting for 3 replicas (current: 3)
STEP: verifying time waited for a scale up to 2 replicas 11/14/22 03:01:32.666
STEP: verifying time waited for a scale up to 3 replicas 11/14/22 03:01:32.666
STEP: Removing consuming RC consumer 11/14/22 03:01:32.701
Nov 14 03:01:32.701: INFO: RC consumer: stopping metric consumer
Nov 14 03:01:32.701: INFO: RC consumer: stopping mem consumer
Nov 14 03:01:33.037: INFO: RC consumer: stopping CPU consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-7714, will wait for the garbage collector to delete the pods 11/14/22 03:01:43.037
Nov 14 03:01:43.158: INFO: Deleting Deployment.apps consumer took: 36.982137ms
Nov 14 03:01:43.258: INFO: Terminating Deployment.apps consumer pods took: 100.287734ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-7714, will wait for the garbage collector to delete the pods 11/14/22 03:01:46.017
Nov 14 03:01:46.138: INFO: Deleting ReplicationController consumer-ctrl took: 37.89276ms
Nov 14 03:01:46.239: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.839125ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/node/init/init.go:32
Nov 14 03:01:47.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-7714" for this suite.
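Editor's note: this HPA behavior spec checks that, with scale-up limited by a number-of-Pods rate, the consumer Deployment grows from 1 to 2 to 3 replicas no faster than the policy allows; the "waiting for N replicas" lines at roughly one-minute intervals above are that cap being observed. A minimal autoscaling/v2 object with such a behavior block is sketched below. The exact target, replica bounds, and policy values used by the e2e test are not visible in this log, so the 1-Pod-per-60s policy and 50% CPU target here are illustrative assumptions.

```go
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// podsRateLimitedHPA returns an HPA whose scale-up is capped at one new Pod
// per minute, the kind of behavior the spec above exercises.
func podsRateLimitedHPA() *autoscalingv2.HorizontalPodAutoscaler {
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer", Namespace: "horizontal-pod-autoscaling-7714"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "consumer",
			},
			MinReplicas: int32Ptr(1),
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: "cpu",
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: int32Ptr(50), // illustrative target
					},
				},
			}},
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{
					Policies: []autoscalingv2.HPAScalingPolicy{{
						Type:          autoscalingv2.PodsScalingPolicy,
						Value:         1,  // at most one additional Pod...
						PeriodSeconds: 60, // ...per minute
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(podsRateLimitedHPA().Name)
}
```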
�[38;5;243m11/14/22 03:01:47.741�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [160.814 seconds]�[0m [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m with scale limited by number of Pods rate �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:211�[0m should scale up no more than given number of Pods per minute �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 02:59:06.961�[0m Nov 14 02:59:06.961: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 02:59:06.963�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 02:59:07.06�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 02:59:07.122�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] should scale up no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m11/14/22 02:59:07.184�[0m Nov 14 02:59:07.184: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m11/14/22 02:59:07.186�[0m �[1mSTEP:�[0m Creating deployment consumer in namespace horizontal-pod-autoscaling-7714 �[38;5;243m11/14/22 02:59:07.236�[0m I1114 02:59:07.271874 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-7714, replica count: 1 I1114 02:59:17.322915 13 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 02:59:17.323�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-7714 �[38;5;243m11/14/22 02:59:17.38�[0m I1114 02:59:17.417761 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-7714, replica count: 1 I1114 02:59:27.468570 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 02:59:32.468: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 14 02:59:32.500: INFO: RC consumer: consume 45 millicores in total Nov 14 02:59:32.500: INFO: RC consumer: consume 0 MB in total Nov 14 02:59:32.500: INFO: RC consumer: disabling mem consumption Nov 14 02:59:32.500: INFO: RC consumer: setting consumption to 45 millicores in total Nov 14 02:59:32.500: INFO: RC consumer: consume custom metric 0 in total Nov 14 02:59:32.501: INFO: RC consumer: disabling consumption of custom metric QPS �[1mSTEP:�[0m triggering scale up by increasing consumption �[38;5;243m11/14/22 02:59:32.537�[0m Nov 14 02:59:32.537: INFO: RC consumer: consume 135 millicores in total Nov 14 02:59:32.538: INFO: RC consumer: setting consumption to 135 millicores in total 
Nov 14 02:59:32.568: INFO: waiting for 2 replicas (current: 1) Nov 14 02:59:52.601: INFO: waiting for 2 replicas (current: 1) Nov 14 03:00:02.501: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:00:02.501: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:00:12.602: INFO: waiting for 2 replicas (current: 1) Nov 14 03:00:32.563: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:00:32.563: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:00:32.600: INFO: waiting for 2 replicas (current: 2) Nov 14 03:00:32.632: INFO: waiting for 3 replicas (current: 2) Nov 14 03:00:52.666: INFO: waiting for 3 replicas (current: 2) Nov 14 03:01:02.604: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:01:02.604: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:01:12.665: INFO: waiting for 3 replicas (current: 2) Nov 14 03:01:32.646: INFO: RC consumer: sending request to consume 135 millicores Nov 14 03:01:32.647: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7714/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 14 03:01:32.666: INFO: waiting for 3 replicas (current: 3) �[1mSTEP:�[0m verifying time waited for a scale up to 2 replicas �[38;5;243m11/14/22 03:01:32.666�[0m �[1mSTEP:�[0m verifying time waited for a scale up to 3 replicas �[38;5;243m11/14/22 03:01:32.666�[0m �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m11/14/22 03:01:32.701�[0m Nov 14 03:01:32.701: INFO: RC consumer: stopping metric consumer Nov 14 03:01:32.701: INFO: RC consumer: stopping mem consumer Nov 14 03:01:33.037: INFO: RC consumer: stopping CPU consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-7714, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:01:43.037�[0m Nov 14 03:01:43.158: INFO: Deleting Deployment.apps consumer took: 36.982137ms Nov 14 03:01:43.258: INFO: Terminating Deployment.apps consumer pods took: 100.287734ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-7714, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:01:46.017�[0m Nov 14 03:01:46.138: INFO: Deleting ReplicationController consumer-ctrl took: 37.89276ms Nov 14 03:01:46.239: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.839125ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 14 03:01:47.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] 
[Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-7714" for this suite. �[38;5;243m11/14/22 03:01:47.741�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[1mvalidates basic preemption works [Conformance]�[0m �[38;5;243mtest/e2e/scheduling/preemption.go:129�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:01:47.781�[0m Nov 14 03:01:47.782: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m11/14/22 03:01:47.783�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:01:47.878�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:01:47.939�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:96 Nov 14 03:01:48.105: INFO: Waiting up to 1m0s for all nodes to be ready Nov 14 03:02:48.394: INFO: Waiting for terminating namespaces to be deleted... 
[It] validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:129
STEP: Create pods that use 4/5 of node resources. 11/14/22 03:02:48.426
Nov 14 03:02:48.513: INFO: Created pod: pod0-0-sched-preemption-low-priority Nov 14 03:02:48.562: INFO: Created pod: pod0-1-sched-preemption-medium-priority Nov 14 03:02:48.667: INFO: Created pod: pod1-0-sched-preemption-medium-priority Nov 14 03:02:48.704: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 11/14/22 03:02:48.704
Nov 14 03:02:48.705: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-8039" to be "running" Nov 14 03:02:48.738: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 32.850529ms Nov 14 03:02:50.770: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064988278s Nov 14 03:02:52.772: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066887338s Nov 14 03:02:54.772: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 6.066723534s Nov 14 03:02:54.772: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Nov 14 03:02:54.772: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-8039" to be "running" Nov 14 03:02:54.804: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.484337ms Nov 14 03:02:54.804: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Nov 14 03:02:54.804: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-8039" to be "running" Nov 14 03:02:54.835: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 30.92829ms Nov 14 03:02:56.868: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064023394s Nov 14 03:02:58.867: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.062930666s Nov 14 03:02:58.867: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Nov 14 03:02:58.867: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-8039" to be "running" Nov 14 03:02:58.898: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.529579ms Nov 14 03:02:58.898: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a high priority pod that has same requirements as that of lower priority pod 11/14/22 03:02:58.898
Nov 14 03:02:58.936: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-8039" to be "running" Nov 14 03:02:58.976: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 39.687417ms Nov 14 03:03:01.009: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072638575s Nov 14 03:03:03.010: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074095389s Nov 14 03:03:05.009: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true.
Elapsed: 6.073019703s Nov 14 03:03:05.009: INFO: Pod "preemptor-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Nov 14 03:03:05.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-8039" for this suite. 11/14/22 03:03:05.39
------------------------------
• [77.649 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:129
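For context on what this spec exercises: low- and medium-priority pods are sized to fill roughly 4/5 of each node, then a higher-priority preemptor-pod with the same resource requirements is created and must displace a lower-priority victim to reach Running. A rough Go sketch of the key objects involved follows; the PriorityClass name, image, and resource size are illustrative assumptions, not the exact values the framework uses:

// Illustrative sketch only: the shape of a priority-based preemptor pod like the
// one this test schedules; the class name and resource sizes are assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func examplePreemptor() (*schedulingv1.PriorityClass, *corev1.Pod) {
	// A high-priority class; lower-priority pods on a full node become preemption victims.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-high-priority"},
		Value:      1000,
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8", // assumed image for illustration
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Sized so it only fits if a lower-priority pod is preempted.
						corev1.ResourceMemory: resource.MustParse("200Mi"),
					},
				},
			}},
		},
	}
	return pc, pod
}

func main() { _, _ = examplePreemptor() }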
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
test/e2e/scheduling/predicates.go:704
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 03:03:05.435
Nov 14 03:03:05.435: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 11/14/22 03:03:05.436
STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 03:03:05.533
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 03:03:05.595
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Nov 14 03:03:05.662: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 14 03:03:05.731: INFO: Waiting for terminating namespaces to be deleted...
Nov 14 03:03:05.763: INFO: Logging pods the apiserver thinks is on node capz-conf-bpf2r before test Nov 14 03:03:05.799: INFO: calico-node-windows-xk6bd from kube-system started at 2022-11-14 01:08:57 +0000 UTC (2 container statuses recorded) Nov 14 03:03:05.800: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:03:05.800: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:03:05.800: INFO: containerd-logger-bpt69 from kube-system started at 2022-11-14 01:08:57 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:03:05.800: INFO: csi-proxy-76x9p from kube-system started at 2022-11-14 01:09:18 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:03:05.800: INFO: kube-proxy-windows-nz2rt from kube-system started at 2022-11-14 01:08:57 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:03:05.800: INFO: pod0-1-sched-preemption-medium-priority from sched-preemption-8039 started at 2022-11-14 03:02:48 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container pod0-1-sched-preemption-medium-priority ready: true, restart count 0 Nov 14 03:03:05.800: INFO: preemptor-pod from sched-preemption-8039 started at 2022-11-14 03:03:00 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container preemptor-pod ready: true, restart count 0
Nov 14 03:03:05.800: INFO: Logging pods the apiserver thinks is on node capz-conf-sq8nr before test Nov 14 03:03:05.838: INFO: calico-node-windows-w6hn2 from kube-system started at 2022-11-14 01:08:50 +0000 UTC (2 container statuses recorded) Nov 14 03:03:05.838: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:03:05.838: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:03:05.838: INFO: containerd-logger-bf8mz from kube-system started at 2022-11-14 01:08:50 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.838: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:03:05.838: INFO: csi-proxy-fbwsw from kube-system started at 2022-11-14 01:09:15 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.838: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:03:05.839: INFO: kube-proxy-windows-lldgb from kube-system started at 2022-11-14 01:08:50 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.839: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:03:05.839: INFO: pod1-0-sched-preemption-medium-priority from sched-preemption-8039 started at 2022-11-14 03:02:53 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.839: INFO: Container pod1-0-sched-preemption-medium-priority ready: true, restart count 0 Nov 14 03:03:05.839: INFO: pod1-1-sched-preemption-medium-priority from sched-preemption-8039 started at 2022-11-14 03:02:53 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.839: INFO: Container pod1-1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704
STEP: Trying to launch a pod without a label to get a node which can launch it. 11/14/22 03:03:05.839
Nov 14 03:03:05.880: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3314" to be "running" Nov 14 03:03:05.910: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 30.744337ms Nov 14 03:03:07.943: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063608613s Nov 14 03:03:09.943: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.063157103s Nov 14 03:03:09.943: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 11/14/22 03:03:09.975
STEP: Trying to apply a random label on the found node. 11/14/22 03:03:10.051
STEP: verifying the node has the label kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662 95 11/14/22 03:03:10.093
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 11/14/22 03:03:10.125
Nov 14 03:03:10.160: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-3314" to be "not pending" Nov 14 03:03:10.191: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.191248ms Nov 14 03:03:12.223: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062824534s Nov 14 03:03:16.119: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.958436326s Nov 14 03:03:16.224: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 6.064143798s Nov 14 03:03:16.224: INFO: Pod "pod4" satisfied condition "not pending"
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.1.0.4 on the node which pod4 resides and expect not scheduled 11/14/22 03:03:16.224
Nov 14 03:03:16.261: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-3314" to be "not pending" Nov 14 03:03:16.295: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.707982ms Nov 14 03:03:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066933001s Nov 14 03:03:20.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066547796s Nov 14 03:03:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066731574s Nov 14 03:03:24.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066716706s Nov 14 03:03:26.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067828585s Nov 14 03:03:28.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.066430829s Nov 14 03:03:30.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.067362217s Nov 14 03:03:32.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067517722s Nov 14 03:03:34.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.067703632s Nov 14 03:03:36.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066392839s Nov 14 03:03:38.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.06816671s Nov 14 03:03:40.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.066594887s Nov 14 03:03:42.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false.
Elapsed: 26.066724238s Nov 14 03:03:44.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.067565967s Nov 14 03:03:46.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.066328988s Nov 14 03:03:48.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.068162748s Nov 14 03:03:50.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.066203098s Nov 14 03:03:52.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.066457843s Nov 14 03:03:54.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.067176605s Nov 14 03:03:56.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.067280007s Nov 14 03:03:58.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.06785527s Nov 14 03:04:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.067495199s Nov 14 03:04:02.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.067240365s Nov 14 03:04:04.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.06670394s Nov 14 03:04:06.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.067957276s Nov 14 03:04:08.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.067593223s Nov 14 03:04:10.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.066961815s Nov 14 03:04:12.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.067192505s Nov 14 03:04:14.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.066450784s Nov 14 03:04:16.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.067135121s Nov 14 03:04:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.067165865s Nov 14 03:04:20.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.067516393s Nov 14 03:04:22.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.067698722s Nov 14 03:04:24.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.068050367s Nov 14 03:04:26.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.068790886s Nov 14 03:04:28.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.068992206s Nov 14 03:04:30.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.066365706s Nov 14 03:04:32.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.066929705s Nov 14 03:04:34.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.0665671s Nov 14 03:04:36.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.067079222s Nov 14 03:04:38.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.067984063s Nov 14 03:04:40.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.066429864s Nov 14 03:04:42.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.066939146s Nov 14 03:04:44.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.068835114s Nov 14 03:04:46.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.06660021s Nov 14 03:04:48.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.068300359s Nov 14 03:04:50.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.068632902s Nov 14 03:04:52.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.066324101s Nov 14 03:04:54.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.068948404s Nov 14 03:04:56.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.066893814s Nov 14 03:04:58.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.068230564s Nov 14 03:05:00.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.067961778s Nov 14 03:05:02.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.067175093s Nov 14 03:05:04.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.068579731s Nov 14 03:05:06.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.068263205s Nov 14 03:05:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.066840582s Nov 14 03:05:10.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.066620246s Nov 14 03:05:12.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.067666943s Nov 14 03:05:14.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.06976754s Nov 14 03:05:16.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.067061903s Nov 14 03:05:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.067628153s Nov 14 03:05:20.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.066599008s Nov 14 03:05:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.0665563s Nov 14 03:05:24.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.066480946s Nov 14 03:05:26.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.067187112s Nov 14 03:05:28.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.067104457s Nov 14 03:05:30.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.066764288s Nov 14 03:05:32.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.069616131s Nov 14 03:05:34.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.067077478s Nov 14 03:05:36.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.067861687s Nov 14 03:05:38.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.069064532s Nov 14 03:05:40.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.067977767s Nov 14 03:05:42.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.066745425s Nov 14 03:05:44.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.066718894s Nov 14 03:05:46.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.067184586s Nov 14 03:05:48.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.067703188s Nov 14 03:05:50.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.067147564s Nov 14 03:05:52.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m36.066774621s Nov 14 03:05:54.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.06682246s Nov 14 03:05:56.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.067631893s Nov 14 03:05:58.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.06729473s Nov 14 03:06:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.066877315s Nov 14 03:06:02.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.066793645s Nov 14 03:06:04.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.067001949s Nov 14 03:06:06.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.06652025s Nov 14 03:06:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.066719525s Nov 14 03:06:10.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.066854528s Nov 14 03:06:12.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.066539734s Nov 14 03:06:14.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.067865783s Nov 14 03:06:16.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.066756889s Nov 14 03:06:18.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.066587837s Nov 14 03:06:20.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.066847735s Nov 14 03:06:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.066736417s Nov 14 03:06:24.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.069744636s Nov 14 03:06:26.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.067478973s Nov 14 03:06:28.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.066899077s Nov 14 03:06:30.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.068418037s Nov 14 03:06:32.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.068696306s Nov 14 03:06:34.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.067310689s Nov 14 03:06:36.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.067765024s Nov 14 03:06:38.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.067504659s Nov 14 03:06:40.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.068413425s Nov 14 03:06:42.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.068197066s Nov 14 03:06:44.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.06782004s Nov 14 03:06:46.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.066414336s Nov 14 03:06:48.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.066559305s Nov 14 03:06:50.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.069217552s Nov 14 03:06:52.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.067420904s Nov 14 03:06:54.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.068531586s Nov 14 03:06:56.331: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m40.070118362s Nov 14 03:06:58.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.06654199s Nov 14 03:07:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.067482036s Nov 14 03:07:02.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.067867078s Nov 14 03:07:04.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.068756019s Nov 14 03:07:06.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.067479432s Nov 14 03:07:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.066605792s Nov 14 03:07:10.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.066391055s Nov 14 03:07:12.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.067153799s Nov 14 03:07:14.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.066891957s Nov 14 03:07:16.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.066820243s Nov 14 03:07:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.067085191s Nov 14 03:07:20.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.06700927s Nov 14 03:07:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.066530525s Nov 14 03:07:24.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.068449845s Nov 14 03:07:26.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.068296525s Nov 14 03:07:28.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.066900867s Nov 14 03:07:30.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.066840569s Nov 14 03:07:32.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.066563673s Nov 14 03:07:34.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.068783447s Nov 14 03:07:36.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.067706156s Nov 14 03:07:38.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.066490155s Nov 14 03:07:40.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.067538676s Nov 14 03:07:42.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.067271254s Nov 14 03:07:44.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.066651601s Nov 14 03:07:46.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.06721457s Nov 14 03:07:48.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.068778043s Nov 14 03:07:50.331: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.070276108s Nov 14 03:07:52.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.067531247s Nov 14 03:07:54.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.06846515s Nov 14 03:07:56.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.066620307s Nov 14 03:07:58.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.067948061s Nov 14 03:08:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m44.067546695s Nov 14 03:08:02.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.067661915s Nov 14 03:08:04.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.069677552s Nov 14 03:08:06.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.067378053s Nov 14 03:08:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.06675542s Nov 14 03:08:10.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.068827853s Nov 14 03:08:12.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.066821863s Nov 14 03:08:14.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.066864674s Nov 14 03:08:16.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.067872086s Nov 14 03:08:16.360: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.099839612s
STEP: removing the label kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662 off the node capz-conf-bpf2r 11/14/22 03:08:16.361
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662 11/14/22 03:08:16.437
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Nov 14 03:08:16.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-3314" for this suite. 11/14/22 03:08:16.503
------------------------------
• [SLOW TEST] [311.108 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704
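The predicate under test: two pods asking for the same hostPort and protocol conflict even when one binds hostIP 0.0.0.0 and the other a specific node address, so pod5 is expected to stay Pending for the full 5m0s wait seen above. A hedged sketch using the core/v1 types is below; the image and helper names are assumptions, while the port, host IPs, and node label come from the log:

// Illustrative sketch only: two pods whose hostPort/protocol collide even though
// their hostIPs differ (0.0.0.0 overlaps any specific address on the node).
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostPortPods() (*corev1.Pod, *corev1.Pod) {
	mk := func(name, hostIP string) *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				// Assumption: both pods are pinned to the same node via the random
				// label the test applied to capz-conf-bpf2r above.
				NodeSelector: map[string]string{"kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662": "95"},
				Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // assumed image
					Ports: []corev1.ContainerPort{{
						ContainerPort: 54322,
						HostPort:      54322,
						Protocol:      corev1.ProtocolTCP,
						HostIP:        hostIP,
					}},
				}},
			},
		}
	}
	// pod4 schedules; pod5 cannot, because 0.0.0.0:54322 already claims the port.
	return mk("pod4", "0.0.0.0"), mk("pod5", "10.1.0.4")
}

func main() { _, _ = hostPortPods() }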
Nov 14 03:03:05.763: INFO: Logging pods the apiserver thinks is on node capz-conf-bpf2r before test Nov 14 03:03:05.799: INFO: calico-node-windows-xk6bd from kube-system started at 2022-11-14 01:08:57 +0000 UTC (2 container statuses recorded) Nov 14 03:03:05.800: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:03:05.800: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:03:05.800: INFO: containerd-logger-bpt69 from kube-system started at 2022-11-14 01:08:57 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:03:05.800: INFO: csi-proxy-76x9p from kube-system started at 2022-11-14 01:09:18 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:03:05.800: INFO: kube-proxy-windows-nz2rt from kube-system started at 2022-11-14 01:08:57 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:03:05.800: INFO: pod0-1-sched-preemption-medium-priority from sched-preemption-8039 started at 2022-11-14 03:02:48 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container pod0-1-sched-preemption-medium-priority ready: true, restart count 0 Nov 14 03:03:05.800: INFO: preemptor-pod from sched-preemption-8039 started at 2022-11-14 03:03:00 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.800: INFO: Container preemptor-pod ready: true, restart count 0 Nov 14 03:03:05.800: INFO: Logging pods the apiserver thinks is on node capz-conf-sq8nr before test Nov 14 03:03:05.838: INFO: calico-node-windows-w6hn2 from kube-system started at 2022-11-14 01:08:50 +0000 UTC (2 container statuses recorded) Nov 14 03:03:05.838: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:03:05.838: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:03:05.838: INFO: containerd-logger-bf8mz from kube-system started at 2022-11-14 01:08:50 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.838: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:03:05.838: INFO: csi-proxy-fbwsw from kube-system started at 2022-11-14 01:09:15 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.838: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:03:05.839: INFO: kube-proxy-windows-lldgb from kube-system started at 2022-11-14 01:08:50 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.839: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:03:05.839: INFO: pod1-0-sched-preemption-medium-priority from sched-preemption-8039 started at 2022-11-14 03:02:53 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.839: INFO: Container pod1-0-sched-preemption-medium-priority ready: true, restart count 0 Nov 14 03:03:05.839: INFO: pod1-1-sched-preemption-medium-priority from sched-preemption-8039 started at 2022-11-14 03:02:53 +0000 UTC (1 container statuses recorded) Nov 14 03:03:05.839: INFO: Container pod1-1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704 �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. 
�[38;5;243m11/14/22 03:03:05.839�[0m Nov 14 03:03:05.880: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3314" to be "running" Nov 14 03:03:05.910: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 30.744337ms Nov 14 03:03:07.943: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063608613s Nov 14 03:03:09.943: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.063157103s Nov 14 03:03:09.943: INFO: Pod "without-label" satisfied condition "running" �[1mSTEP:�[0m Explicitly delete pod here to free the resource it takes. �[38;5;243m11/14/22 03:03:09.975�[0m �[1mSTEP:�[0m Trying to apply a random label on the found node. �[38;5;243m11/14/22 03:03:10.051�[0m �[1mSTEP:�[0m verifying the node has the label kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662 95 �[38;5;243m11/14/22 03:03:10.093�[0m �[1mSTEP:�[0m Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled �[38;5;243m11/14/22 03:03:10.125�[0m Nov 14 03:03:10.160: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-3314" to be "not pending" Nov 14 03:03:10.191: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.191248ms Nov 14 03:03:12.223: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062824534s Nov 14 03:03:16.119: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.958436326s Nov 14 03:03:16.224: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 6.064143798s Nov 14 03:03:16.224: INFO: Pod "pod4" satisfied condition "not pending" �[1mSTEP:�[0m Trying to create another pod(pod5) with hostport 54322 but hostIP 10.1.0.4 on the node which pod4 resides and expect not scheduled �[38;5;243m11/14/22 03:03:16.224�[0m Nov 14 03:03:16.261: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-3314" to be "not pending" Nov 14 03:03:16.295: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.707982ms Nov 14 03:03:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066933001s Nov 14 03:03:20.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066547796s Nov 14 03:03:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066731574s Nov 14 03:03:24.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066716706s Nov 14 03:03:26.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067828585s Nov 14 03:03:28.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.066430829s Nov 14 03:03:30.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.067362217s Nov 14 03:03:32.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067517722s Nov 14 03:03:34.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.067703632s Nov 14 03:03:36.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066392839s Nov 14 03:03:38.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.06816671s Nov 14 03:03:40.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.066594887s Nov 14 03:03:42.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.066724238s Nov 14 03:03:44.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.067565967s Nov 14 03:03:46.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.066328988s Nov 14 03:03:48.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.068162748s Nov 14 03:03:50.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.066203098s Nov 14 03:03:52.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.066457843s Nov 14 03:03:54.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.067176605s Nov 14 03:03:56.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.067280007s Nov 14 03:03:58.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.06785527s Nov 14 03:04:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.067495199s Nov 14 03:04:02.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.067240365s Nov 14 03:04:04.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.06670394s Nov 14 03:04:06.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.067957276s Nov 14 03:04:08.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.067593223s Nov 14 03:04:10.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.066961815s Nov 14 03:04:12.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.067192505s Nov 14 03:04:14.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.066450784s Nov 14 03:04:16.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.067135121s Nov 14 03:04:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.067165865s Nov 14 03:04:20.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.067516393s Nov 14 03:04:22.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.067698722s Nov 14 03:04:24.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.068050367s Nov 14 03:04:26.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.068790886s Nov 14 03:04:28.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.068992206s Nov 14 03:04:30.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.066365706s Nov 14 03:04:32.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.066929705s Nov 14 03:04:34.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.0665671s Nov 14 03:04:36.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.067079222s Nov 14 03:04:38.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.067984063s Nov 14 03:04:40.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.066429864s Nov 14 03:04:42.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.066939146s Nov 14 03:04:44.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.068835114s Nov 14 03:04:46.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.06660021s Nov 14 03:04:48.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.068300359s Nov 14 03:04:50.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.068632902s Nov 14 03:04:52.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.066324101s Nov 14 03:04:54.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.068948404s Nov 14 03:04:56.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.066893814s Nov 14 03:04:58.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.068230564s Nov 14 03:05:00.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.067961778s Nov 14 03:05:02.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.067175093s Nov 14 03:05:04.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.068579731s Nov 14 03:05:06.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.068263205s Nov 14 03:05:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.066840582s Nov 14 03:05:10.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.066620246s Nov 14 03:05:12.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.067666943s Nov 14 03:05:14.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.06976754s Nov 14 03:05:16.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.067061903s Nov 14 03:05:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.067628153s Nov 14 03:05:20.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.066599008s Nov 14 03:05:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.0665563s Nov 14 03:05:24.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.066480946s Nov 14 03:05:26.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.067187112s Nov 14 03:05:28.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.067104457s Nov 14 03:05:30.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.066764288s Nov 14 03:05:32.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.069616131s Nov 14 03:05:34.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.067077478s Nov 14 03:05:36.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.067861687s Nov 14 03:05:38.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.069064532s Nov 14 03:05:40.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.067977767s Nov 14 03:05:42.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.066745425s Nov 14 03:05:44.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.066718894s Nov 14 03:05:46.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.067184586s Nov 14 03:05:48.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.067703188s Nov 14 03:05:50.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.067147564s Nov 14 03:05:52.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m36.066774621s Nov 14 03:05:54.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.06682246s Nov 14 03:05:56.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.067631893s Nov 14 03:05:58.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.06729473s Nov 14 03:06:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.066877315s Nov 14 03:06:02.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.066793645s Nov 14 03:06:04.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.067001949s Nov 14 03:06:06.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.06652025s Nov 14 03:06:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.066719525s Nov 14 03:06:10.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.066854528s Nov 14 03:06:12.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.066539734s Nov 14 03:06:14.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.067865783s Nov 14 03:06:16.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.066756889s Nov 14 03:06:18.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.066587837s Nov 14 03:06:20.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.066847735s Nov 14 03:06:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.066736417s Nov 14 03:06:24.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.069744636s Nov 14 03:06:26.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.067478973s Nov 14 03:06:28.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.066899077s Nov 14 03:06:30.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.068418037s Nov 14 03:06:32.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.068696306s Nov 14 03:06:34.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.067310689s Nov 14 03:06:36.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.067765024s Nov 14 03:06:38.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.067504659s Nov 14 03:06:40.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.068413425s Nov 14 03:06:42.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.068197066s Nov 14 03:06:44.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.06782004s Nov 14 03:06:46.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.066414336s Nov 14 03:06:48.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.066559305s Nov 14 03:06:50.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.069217552s Nov 14 03:06:52.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.067420904s Nov 14 03:06:54.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.068531586s Nov 14 03:06:56.331: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m40.070118362s Nov 14 03:06:58.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.06654199s Nov 14 03:07:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.067482036s Nov 14 03:07:02.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.067867078s Nov 14 03:07:04.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.068756019s Nov 14 03:07:06.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.067479432s Nov 14 03:07:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.066605792s Nov 14 03:07:10.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.066391055s Nov 14 03:07:12.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.067153799s Nov 14 03:07:14.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.066891957s Nov 14 03:07:16.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.066820243s Nov 14 03:07:18.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.067085191s Nov 14 03:07:20.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.06700927s Nov 14 03:07:22.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.066530525s Nov 14 03:07:24.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.068449845s Nov 14 03:07:26.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.068296525s Nov 14 03:07:28.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.066900867s Nov 14 03:07:30.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.066840569s Nov 14 03:07:32.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.066563673s Nov 14 03:07:34.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.068783447s Nov 14 03:07:36.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.067706156s Nov 14 03:07:38.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.066490155s Nov 14 03:07:40.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.067538676s Nov 14 03:07:42.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.067271254s Nov 14 03:07:44.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.066651601s Nov 14 03:07:46.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.06721457s Nov 14 03:07:48.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.068778043s Nov 14 03:07:50.331: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.070276108s Nov 14 03:07:52.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.067531247s Nov 14 03:07:54.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.06846515s Nov 14 03:07:56.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.066620307s Nov 14 03:07:58.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.067948061s Nov 14 03:08:00.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m44.067546695s Nov 14 03:08:02.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.067661915s Nov 14 03:08:04.330: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.069677552s Nov 14 03:08:06.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.067378053s Nov 14 03:08:08.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.06675542s Nov 14 03:08:10.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.068827853s Nov 14 03:08:12.327: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.066821863s Nov 14 03:08:14.328: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.066864674s Nov 14 03:08:16.329: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.067872086s Nov 14 03:08:16.360: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.099839612s �[1mSTEP:�[0m removing the label kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662 off the node capz-conf-bpf2r �[38;5;243m11/14/22 03:08:16.361�[0m �[1mSTEP:�[0m verifying the node doesn't have the label kubernetes.io/e2e-a13a9d96-cb9d-49ac-958b-f807d57d9662 �[38;5;243m11/14/22 03:08:16.437�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 14 03:08:16.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-pred-3314" for this suite. 
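The repeated `Pod "pod5": Phase="Pending"` lines above are the framework polling the pod's phase every ~2 seconds against a 5-minute budget before the spec removes the node label and tears down. A minimal client-go sketch of that polling pattern (not part of the suite; the namespace, pod name, and kubeconfig path are taken from the log, everything else is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	// Poll every 2s for up to 5m, mirroring the cadence and timeout seen in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("sched-pred-3314").Get(context.TODO(), "pod5", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", pod.Name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == corev1.PodRunning, nil
	})
	if err != nil {
		// Here the timeout is the observed outcome: pod5 stays Pending for the full 5m.
		fmt.Println("pod did not leave Pending:", err)
	}
}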
�[38;5;243m11/14/22 03:08:16.503�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould delete pods created by rc when not orphaning [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:312�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:08:16.547�[0m Nov 14 03:08:16.547: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 03:08:16.55�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:08:16.649�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:08:16.711�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should delete pods created by rc when not orphaning [Conformance] test/e2e/apimachinery/garbage_collector.go:312 �[1mSTEP:�[0m create the rc �[38;5;243m11/14/22 03:08:16.773�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m11/14/22 03:08:21.846�[0m �[1mSTEP:�[0m wait for all pods to be garbage collected �[38;5;243m11/14/22 03:08:21.888�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 03:08:26.95�[0m Nov 14 03:08:27.058: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 03:08:27.090: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 32.089018ms Nov 14 03:08:27.090: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 03:08:27.090: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 03:08:27.419: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 03:08:27.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-9497" for this suite. 
�[38;5;243m11/14/22 03:08:27.454�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [10.943 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should delete pods created by rc when not orphaning [Conformance] �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:312�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:08:16.547�[0m Nov 14 03:08:16.547: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 03:08:16.55�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:08:16.649�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:08:16.711�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should delete pods created by rc when not orphaning [Conformance] test/e2e/apimachinery/garbage_collector.go:312 �[1mSTEP:�[0m create the rc �[38;5;243m11/14/22 03:08:16.773�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m11/14/22 03:08:21.846�[0m �[1mSTEP:�[0m wait for all pods to be garbage collected �[38;5;243m11/14/22 03:08:21.888�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 03:08:26.95�[0m Nov 14 03:08:27.058: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 03:08:27.090: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 32.089018ms Nov 14 03:08:27.090: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 03:08:27.090: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 03:08:27.419: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 03:08:27.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-9497" for this suite. 
�[38;5;243m11/14/22 03:08:27.454�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] �[38;5;243mAllocatable node memory�[0m �[1mshould be equal to a calculated allocatable memory value�[0m �[38;5;243mtest/e2e/windows/memory_limits.go:54�[0m [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:08:27.49�[0m Nov 14 03:08:27.490: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename memory-limit-test-windows �[38;5;243m11/14/22 03:08:27.491�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:08:27.589�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:08:27.65�[0m [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should be equal to a calculated allocatable memory value test/e2e/windows/memory_limits.go:54 �[1mSTEP:�[0m Getting memory details from node status and kubelet config �[38;5;243m11/14/22 03:08:27.744�[0m Nov 14 03:08:27.744: INFO: Getting configuration details for node capz-conf-bpf2r Nov 14 03:08:27.792: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}} �[1mSTEP:�[0m Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI} �[38;5;243m11/14/22 03:08:27.792�[0m [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 14 03:08:27.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "memory-limit-test-windows-2354" for this suite. 
�[38;5;243m11/14/22 03:08:27.832�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [0.381 seconds]�[0m [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] �[38;5;243mtest/e2e/windows/framework.go:27�[0m Allocatable node memory �[38;5;243mtest/e2e/windows/memory_limits.go:53�[0m should be equal to a calculated allocatable memory value �[38;5;243mtest/e2e/windows/memory_limits.go:54�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:08:27.49�[0m Nov 14 03:08:27.490: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename memory-limit-test-windows �[38;5;243m11/14/22 03:08:27.491�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:08:27.589�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:08:27.65�[0m [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should be equal to a calculated allocatable memory value test/e2e/windows/memory_limits.go:54 �[1mSTEP:�[0m Getting memory details from node status and kubelet config �[38;5;243m11/14/22 03:08:27.744�[0m Nov 14 03:08:27.744: INFO: Getting configuration details for node capz-conf-bpf2r Nov 14 03:08:27.792: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}} �[1mSTEP:�[0m Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI} �[38;5;243m11/14/22 03:08:27.792�[0m [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 14 03:08:27.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "memory-limit-test-windows-2354" for this suite. 
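The allocatable check above reduces to capacity minus the kubelet reservations: 16776756Ki − 0 (system-reserved) − 0 (kube-reserved) − 100Mi (102400Ki hard eviction) = 16674356Ki, which matches the stated allocatable. A small sketch with apimachinery's resource.Quantity that reproduces the arithmetic (values copied from the nodeMem line above; the program itself is illustrative, not part of the test):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capacity := resource.MustParse("16776756Ki") // node memory capacity from the log
	hardEviction := resource.MustParse("100Mi")  // kubelet hard-eviction threshold from the log
	// system-reserved and kube-reserved are both zero in this run, so they drop out.

	allocatable := capacity.DeepCopy()
	allocatable.Sub(hardEviction)

	// Should print 16674356Ki, matching the stated allocatable memory.
	fmt.Println(allocatable.String())
}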
�[38;5;243m11/14/22 03:08:27.832�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-apps] CronJob�[0m �[1mshould not schedule jobs when suspended [Slow] [Conformance]�[0m �[38;5;243mtest/e2e/apps/cronjob.go:96�[0m [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:08:27.878�[0m Nov 14 03:08:27.878: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename cronjob �[38;5;243m11/14/22 03:08:27.879�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:08:27.976�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:08:28.037�[0m [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 [It] should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96 �[1mSTEP:�[0m Creating a suspended cronjob �[38;5;243m11/14/22 03:08:28.098�[0m �[1mSTEP:�[0m Ensuring no jobs are scheduled �[38;5;243m11/14/22 03:08:28.137�[0m �[1mSTEP:�[0m Ensuring no job exists by listing jobs explicitly �[38;5;243m11/14/22 03:13:28.202�[0m �[1mSTEP:�[0m Removing cronjob �[38;5;243m11/14/22 03:13:28.233�[0m [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 14 03:13:28.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "cronjob-6081" for this suite. 
�[38;5;243m11/14/22 03:13:28.306�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [300.463 seconds]�[0m [sig-apps] CronJob �[38;5;243mtest/e2e/apps/framework.go:23�[0m should not schedule jobs when suspended [Slow] [Conformance] �[38;5;243mtest/e2e/apps/cronjob.go:96�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:08:27.878�[0m Nov 14 03:08:27.878: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename cronjob �[38;5;243m11/14/22 03:08:27.879�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:08:27.976�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:08:28.037�[0m [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 [It] should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96 �[1mSTEP:�[0m Creating a suspended cronjob �[38;5;243m11/14/22 03:08:28.098�[0m �[1mSTEP:�[0m Ensuring no jobs are scheduled �[38;5;243m11/14/22 03:08:28.137�[0m �[1mSTEP:�[0m Ensuring no job exists by listing jobs explicitly �[38;5;243m11/14/22 03:13:28.202�[0m �[1mSTEP:�[0m Removing cronjob �[38;5;243m11/14/22 03:13:28.233�[0m [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 14 03:13:28.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "cronjob-6081" for this suite. 
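The 300-second CronJob spec above only asserts an absence: with spec.suspend set, no Jobs may appear for five minutes. A rough sketch of the kind of suspended CronJob object involved (the name, schedule, image, and namespace are placeholders and only the Suspend flag is the point; the real spec builds its object inside the e2e framework):

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/utils/pointer"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-cronjob"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			Suspend:  pointer.Bool(true), // the property under test: no Jobs may be created
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{
								{Name: "c", Image: "registry.k8s.io/pause:3.9"},
							},
						},
					},
				},
			},
		},
	}
	// The suite then lists Jobs in the namespace for 5 minutes and expects to find none.
	if _, err := cs.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}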
�[38;5;243m11/14/22 03:13:28.306�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243mReplicationController light�[0m �[1m[Slow] Should scale from 2 pods to 1 pod�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:103�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:13:28.344�[0m Nov 14 03:13:28.344: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 03:13:28.346�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:13:28.448�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:13:28.509�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] [Slow] Should scale from 2 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:103 Nov 14 03:13:28.570: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC rc-light via /v1, Kind=ReplicationController with 2 replicas �[38;5;243m11/14/22 03:13:28.571�[0m �[1mSTEP:�[0m creating replication controller rc-light in namespace horizontal-pod-autoscaling-2511 �[38;5;243m11/14/22 03:13:28.619�[0m I1114 03:13:28.654564 13 runners.go:193] Created replication controller with name: rc-light, namespace: horizontal-pod-autoscaling-2511, replica count: 2 I1114 03:13:38.705300 13 runners.go:193] rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 03:13:38.705�[0m �[1mSTEP:�[0m creating replication controller rc-light-ctrl in namespace horizontal-pod-autoscaling-2511 �[38;5;243m11/14/22 03:13:38.753�[0m I1114 03:13:38.787877 13 runners.go:193] Created replication controller with name: rc-light-ctrl, namespace: horizontal-pod-autoscaling-2511, replica count: 1 I1114 03:13:48.841226 13 runners.go:193] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 03:13:53.843: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1 Nov 14 03:13:53.875: INFO: RC rc-light: consume 50 millicores in total Nov 14 03:13:53.875: INFO: RC rc-light: setting consumption to 50 millicores in total Nov 14 03:13:53.875: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:13:53.875: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:13:53.875: INFO: RC rc-light: 
consume 0 MB in total Nov 14 03:13:53.875: INFO: RC rc-light: disabling mem consumption Nov 14 03:13:53.875: INFO: RC rc-light: consume custom metric 0 in total Nov 14 03:13:53.875: INFO: RC rc-light: disabling consumption of custom metric QPS Nov 14 03:13:53.942: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:13.976: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:23.927: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:14:23.927: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:14:33.976: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:53.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:53.991: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:14:53.991: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:15:13.976: INFO: waiting for 1 replicas (current: 2) Nov 14 03:15:24.032: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:15:24.032: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:15:33.974: INFO: waiting for 1 replicas (current: 2) Nov 14 03:15:53.975: INFO: waiting for 1 replicas (current: 2) Nov 14 03:15:54.077: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:15:54.078: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:16:13.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:16:24.120: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:16:24.120: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:16:33.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:16:53.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:16:54.167: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:16:54.167: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:17:13.975: INFO: waiting for 1 replicas (current: 2) Nov 14 03:17:24.210: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:17:24.210: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:17:33.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:17:53.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:17:54.251: INFO: RC rc-light: sending request 
to consume 50 millicores Nov 14 03:17:54.251: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:18:13.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:18:24.292: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:18:24.292: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:18:33.978: INFO: waiting for 1 replicas (current: 2) Nov 14 03:18:53.980: INFO: waiting for 1 replicas (current: 2) Nov 14 03:18:54.331: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:18:54.331: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:19:13.977: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m Removing consuming RC rc-light �[38;5;243m11/14/22 03:19:14.013�[0m Nov 14 03:19:14.013: INFO: RC rc-light: stopping metric consumer Nov 14 03:19:14.014: INFO: RC rc-light: stopping CPU consumer Nov 14 03:19:14.014: INFO: RC rc-light: stopping mem consumer �[1mSTEP:�[0m deleting ReplicationController rc-light in namespace horizontal-pod-autoscaling-2511, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:19:24.014�[0m Nov 14 03:19:24.135: INFO: Deleting ReplicationController rc-light took: 36.567428ms Nov 14 03:19:24.235: INFO: Terminating ReplicationController rc-light pods took: 100.482003ms �[1mSTEP:�[0m deleting ReplicationController rc-light-ctrl in namespace horizontal-pod-autoscaling-2511, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:19:25.71�[0m Nov 14 03:19:25.829: INFO: Deleting ReplicationController rc-light-ctrl took: 35.567499ms Nov 14 03:19:25.930: INFO: Terminating ReplicationController rc-light-ctrl pods took: 100.883648ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 03:19:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-2511" for this suite. 
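The HPA spec above alternates two loops: roughly every 30s it drives 50 millicores of load through the rc-light-ctrl service proxy, and roughly every 20s it re-reads the ReplicationController until it has scaled from 2 ready replicas down to 1. A rough client-go sketch of the polling half (namespace and RC name copied from the log; the interval, timeout, and error handling are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 20s (the cadence of the "waiting for 1 replicas" lines) with a generous timeout.
	err = wait.Poll(20*time.Second, 15*time.Minute, func() (bool, error) {
		rc, err := cs.CoreV1().ReplicationControllers("horizontal-pod-autoscaling-2511").
			Get(context.TODO(), "rc-light", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for 1 replicas (current: %d)\n", rc.Status.ReadyReplicas)
		return rc.Status.ReadyReplicas == 1, nil
	})
	if err != nil {
		panic(err)
	}
}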
�[38;5;243m11/14/22 03:19:27.837�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [359.529 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m ReplicationController light �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:88�[0m [Slow] Should scale from 2 pods to 1 pod �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:103�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:13:28.344�[0m Nov 14 03:13:28.344: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/14/22 03:13:28.346�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:13:28.448�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:13:28.509�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] [Slow] Should scale from 2 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:103 Nov 14 03:13:28.570: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC rc-light via /v1, Kind=ReplicationController with 2 replicas �[38;5;243m11/14/22 03:13:28.571�[0m �[1mSTEP:�[0m creating replication controller rc-light in namespace horizontal-pod-autoscaling-2511 �[38;5;243m11/14/22 03:13:28.619�[0m I1114 03:13:28.654564 13 runners.go:193] Created replication controller with name: rc-light, namespace: horizontal-pod-autoscaling-2511, replica count: 2 I1114 03:13:38.705300 13 runners.go:193] rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/14/22 03:13:38.705�[0m �[1mSTEP:�[0m creating replication controller rc-light-ctrl in namespace horizontal-pod-autoscaling-2511 �[38;5;243m11/14/22 03:13:38.753�[0m I1114 03:13:38.787877 13 runners.go:193] Created replication controller with name: rc-light-ctrl, namespace: horizontal-pod-autoscaling-2511, replica count: 1 I1114 03:13:48.841226 13 runners.go:193] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 03:13:53.843: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1 Nov 14 03:13:53.875: INFO: RC rc-light: consume 50 millicores in total Nov 14 03:13:53.875: INFO: RC rc-light: setting consumption to 50 millicores in total Nov 14 03:13:53.875: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:13:53.875: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:13:53.875: INFO: RC rc-light: consume 0 MB in total Nov 14 03:13:53.875: INFO: RC rc-light: disabling mem consumption Nov 14 03:13:53.875: INFO: RC rc-light: consume custom metric 0 in total Nov 14 03:13:53.875: INFO: RC rc-light: disabling consumption of custom metric QPS Nov 14 03:13:53.942: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:13.976: INFO: waiting 
for 1 replicas (current: 2) Nov 14 03:14:23.927: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:14:23.927: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:14:33.976: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:53.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:14:53.991: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:14:53.991: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:15:13.976: INFO: waiting for 1 replicas (current: 2) Nov 14 03:15:24.032: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:15:24.032: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:15:33.974: INFO: waiting for 1 replicas (current: 2) Nov 14 03:15:53.975: INFO: waiting for 1 replicas (current: 2) Nov 14 03:15:54.077: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:15:54.078: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:16:13.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:16:24.120: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:16:24.120: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:16:33.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:16:53.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:16:54.167: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:16:54.167: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:17:13.975: INFO: waiting for 1 replicas (current: 2) Nov 14 03:17:24.210: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:17:24.210: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:17:33.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:17:53.977: INFO: waiting for 1 replicas (current: 2) Nov 14 03:17:54.251: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:17:54.251: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:18:13.977: INFO: waiting for 1 replicas 
(current: 2) Nov 14 03:18:24.292: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:18:24.292: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:18:33.978: INFO: waiting for 1 replicas (current: 2) Nov 14 03:18:53.980: INFO: waiting for 1 replicas (current: 2) Nov 14 03:18:54.331: INFO: RC rc-light: sending request to consume 50 millicores Nov 14 03:18:54.331: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2511/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 14 03:19:13.977: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m Removing consuming RC rc-light �[38;5;243m11/14/22 03:19:14.013�[0m Nov 14 03:19:14.013: INFO: RC rc-light: stopping metric consumer Nov 14 03:19:14.014: INFO: RC rc-light: stopping CPU consumer Nov 14 03:19:14.014: INFO: RC rc-light: stopping mem consumer �[1mSTEP:�[0m deleting ReplicationController rc-light in namespace horizontal-pod-autoscaling-2511, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:19:24.014�[0m Nov 14 03:19:24.135: INFO: Deleting ReplicationController rc-light took: 36.567428ms Nov 14 03:19:24.235: INFO: Terminating ReplicationController rc-light pods took: 100.482003ms �[1mSTEP:�[0m deleting ReplicationController rc-light-ctrl in namespace horizontal-pod-autoscaling-2511, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:19:25.71�[0m Nov 14 03:19:25.829: INFO: Deleting ReplicationController rc-light-ctrl took: 35.567499ms Nov 14 03:19:25.930: INFO: Terminating ReplicationController rc-light-ctrl pods took: 100.883648ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 03:19:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-2511" for this suite. 
�[38;5;243m11/14/22 03:19:27.837�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould delete RS created by deployment when not orphaning [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:491�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:19:27.877�[0m Nov 14 03:19:27.877: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 03:19:27.879�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:19:27.976�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:19:28.038�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should delete RS created by deployment when not orphaning [Conformance] test/e2e/apimachinery/garbage_collector.go:491 �[1mSTEP:�[0m create the deployment �[38;5;243m11/14/22 03:19:28.099�[0m �[1mSTEP:�[0m Wait for the Deployment to create new ReplicaSet �[38;5;243m11/14/22 03:19:28.133�[0m �[1mSTEP:�[0m delete the deployment �[38;5;243m11/14/22 03:19:28.174�[0m �[1mSTEP:�[0m wait for all rs to be garbage collected �[38;5;243m11/14/22 03:19:28.218�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 03:19:28.312�[0m Nov 14 03:19:28.423: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 03:19:28.456: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. 
Elapsed: 33.15866ms Nov 14 03:19:28.456: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 03:19:28.456: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 03:19:28.798: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 03:19:28.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-1055" for this suite. �[38;5;243m11/14/22 03:19:28.833�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [0.994 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should delete RS created by deployment when not orphaning [Conformance] �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:491�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:19:27.877�[0m Nov 14 03:19:27.877: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 03:19:27.879�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:19:27.976�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:19:28.038�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should delete RS created by deployment when not orphaning [Conformance] test/e2e/apimachinery/garbage_collector.go:491 �[1mSTEP:�[0m create the deployment �[38;5;243m11/14/22 03:19:28.099�[0m �[1mSTEP:�[0m Wait for the Deployment to create new ReplicaSet �[38;5;243m11/14/22 03:19:28.133�[0m �[1mSTEP:�[0m delete the deployment �[38;5;243m11/14/22 03:19:28.174�[0m �[1mSTEP:�[0m wait for all rs to be garbage collected �[38;5;243m11/14/22 03:19:28.218�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 03:19:28.312�[0m Nov 14 03:19:28.423: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 03:19:28.456: INFO: Pod 
"kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 33.15866ms Nov 14 03:19:28.456: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 03:19:28.456: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 03:19:28.798: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 03:19:28.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-1055" for this suite. 
�[38;5;243m11/14/22 03:19:28.833�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:650�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:19:28.872�[0m Nov 14 03:19:28.872: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 03:19:28.873�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:19:28.971�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:19:29.032�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/apimachinery/garbage_collector.go:650 �[1mSTEP:�[0m create the rc �[38;5;243m11/14/22 03:19:29.127�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m11/14/22 03:19:34.198�[0m �[1mSTEP:�[0m wait for the rc to be deleted �[38;5;243m11/14/22 03:19:34.232�[0m Nov 14 03:19:35.317: INFO: 80 pods remaining Nov 14 03:19:35.317: INFO: 80 pods has nil DeletionTimestamp Nov 14 03:19:35.317: INFO: Nov 14 03:19:36.313: INFO: 71 pods remaining Nov 14 03:19:36.313: INFO: 70 pods has nil DeletionTimestamp Nov 14 03:19:36.313: INFO: Nov 14 03:19:37.314: INFO: 58 pods remaining Nov 14 03:19:37.314: INFO: 58 pods has nil DeletionTimestamp Nov 14 03:19:37.314: INFO: Nov 14 03:19:38.306: INFO: 40 pods remaining Nov 14 03:19:38.306: INFO: 40 pods has nil DeletionTimestamp Nov 14 03:19:38.306: INFO: Nov 14 03:19:39.304: INFO: 31 pods remaining Nov 14 03:19:39.304: INFO: 31 pods has nil DeletionTimestamp Nov 14 03:19:39.304: INFO: Nov 14 03:19:40.309: INFO: 17 pods remaining Nov 14 03:19:40.309: INFO: 17 pods has nil DeletionTimestamp Nov 14 03:19:40.309: INFO: �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 03:19:41.299�[0m Nov 14 03:19:41.408: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 03:19:41.443: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. 
Elapsed: 35.383924ms Nov 14 03:19:41.443: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 03:19:41.443: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 03:19:41.801: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 03:19:41.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-2550" for this suite. �[38;5;243m11/14/22 03:19:41.837�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [13.002 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:650�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/14/22 03:19:28.872�[0m Nov 14 03:19:28.872: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m11/14/22 03:19:28.873�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/14/22 03:19:28.971�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/14/22 03:19:29.032�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/apimachinery/garbage_collector.go:650 �[1mSTEP:�[0m create the rc �[38;5;243m11/14/22 03:19:29.127�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m11/14/22 03:19:34.198�[0m �[1mSTEP:�[0m wait for the rc to be deleted �[38;5;243m11/14/22 03:19:34.232�[0m Nov 14 03:19:35.317: INFO: 80 pods remaining Nov 14 03:19:35.317: INFO: 80 pods has nil DeletionTimestamp Nov 14 03:19:35.317: INFO: Nov 14 03:19:36.313: INFO: 71 pods remaining Nov 14 03:19:36.313: INFO: 70 pods has nil DeletionTimestamp Nov 14 03:19:36.313: INFO: Nov 14 03:19:37.314: INFO: 58 pods remaining Nov 14 03:19:37.314: INFO: 58 pods has nil DeletionTimestamp Nov 14 
03:19:37.314: INFO: Nov 14 03:19:38.306: INFO: 40 pods remaining Nov 14 03:19:38.306: INFO: 40 pods has nil DeletionTimestamp Nov 14 03:19:38.306: INFO: Nov 14 03:19:39.304: INFO: 31 pods remaining Nov 14 03:19:39.304: INFO: 31 pods has nil DeletionTimestamp Nov 14 03:19:39.304: INFO: Nov 14 03:19:40.309: INFO: 17 pods remaining Nov 14 03:19:40.309: INFO: 17 pods has nil DeletionTimestamp Nov 14 03:19:40.309: INFO: �[1mSTEP:�[0m Gathering metrics �[38;5;243m11/14/22 03:19:41.299�[0m Nov 14 03:19:41.408: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" in namespace "kube-system" to be "running and ready" Nov 14 03:19:41.443: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt": Phase="Running", Reason="", readiness=true. Elapsed: 35.383924ms Nov 14 03:19:41.443: INFO: The phase of Pod kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt is Running (Ready = true) Nov 14 03:19:41.443: INFO: Pod "kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt" satisfied condition "running and ready" Nov 14 03:19:41.801: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 14 03:19:41.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "gc-2550" for this suite. 
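Both garbage-collector specs above turn on DeleteOptions.PropagationPolicy: deleting "when not orphaning" lets the GC remove the dependents after the owner is gone, while the "keep the rc around until all its pods are deleted" case corresponds to foreground propagation, where the owner keeps a deletionTimestamp until its pods have been collected (hence the "N pods remaining / N pods has nil DeletionTimestamp" countdown). A minimal sketch of issuing such a delete with client-go (the RC name here is a placeholder, not necessarily the one the suite creates):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the RC remains visible (with a deletionTimestamp) until the
	// garbage collector has deleted all of its pods, which is what the spec above observes.
	policy := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("gc-2550").Delete(
		context.TODO(), "example-rc",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}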
11/14/22 03:19:41.837 << End Captured GinkgoWriter Output ------------------------------ [run of skipped-spec "S" markers trimmed]
------------------------------ [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:55 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/14/22 03:19:41.886 Nov 14 03:19:41.887: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 03:19:41.887 STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 03:19:41.984 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 03:19:42.045 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:55 Nov 14 03:19:42.106: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/14/22 03:19:42.108 STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-7009 11/14/22 03:19:42.169 I1114 03:19:42.207014 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-7009, replica count: 1 I1114 03:19:52.258423 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1114 03:20:02.259647 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1114 03:20:12.260094 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/14/22 03:20:12.26 STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-7009 11/14/22 03:20:12.316 I1114 03:20:12.351297 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-7009, replica count: 1 I1114 03:20:22.402488 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 03:20:27.403: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 14 03:20:27.434: INFO: RC test-deployment: consume 250 millicores in total Nov 14 03:20:27.434: INFO: RC test-deployment: setting consumption to 250 millicores in total Nov 14 03:20:27.434: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:20:27.434: INFO: RC test-deployment: consume 0 MB in total Nov 14 03:20:27.434: 
INFO: RC test-deployment: consume custom metric 0 in total Nov 14 03:20:27.434: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:20:27.434: INFO: RC test-deployment: disabling mem consumption Nov 14 03:20:27.434: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 14 03:20:27.502: INFO: waiting for 3 replicas (current: 1) Nov 14 03:20:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:20:57.511: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:20:57.511: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:21:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:21:27.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:21:27.557: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:21:27.557: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:21:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:21:57.601: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:21:57.601: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:22:07.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:22:27.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:22:27.652: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:22:27.652: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:22:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:22:57.706: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:22:57.706: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:23:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:23:27.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:23:27.753: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:23:27.753: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:23:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:23:57.796: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:23:57.797: INFO: ConsumeCPU URL: {https 
capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:24:07.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:24:27.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:24:27.846: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:24:27.846: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:24:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:24:57.887: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:24:57.888: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:25:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:25:27.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:25:27.934: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:25:27.935: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:25:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:25:57.992: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:25:57.992: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:26:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:26:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:26:28.044: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:26:28.044: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:26:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:26:58.104: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:26:58.104: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:27:07.539: INFO: waiting for 3 replicas (current: 2) Nov 14 03:27:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:27:28.154: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:27:28.154: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:27:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:27:58.209: 
INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:27:58.209: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:28:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:28:27.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:28:28.257: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:28:28.257: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:28:47.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:28:58.302: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:28:58.302: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:29:07.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:29:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:29:28.340: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:29:28.341: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:29:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:29:58.382: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:29:58.382: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:30:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:30:27.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:30:28.422: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:30:28.422: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:30:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:30:58.462: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:30:58.462: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:31:07.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:31:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:31:28.505: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:31:28.505: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:31:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:31:58.547: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:31:58.547: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:32:07.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:32:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:32:28.596: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:32:28.596: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:32:47.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:32:58.637: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:32:58.639: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:33:07.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:33:27.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:33:28.679: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:33:28.679: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:33:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:33:58.719: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:33:58.719: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:34:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:34:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:34:28.773: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:34:28.773: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:34:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:34:58.813: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:34:58.814: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:35:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:35:27.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:35:27.566: INFO: waiting for 3 replicas (current: 2) Nov 14 03:35:27.566: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: 
"timed out waiting for the condition", } Nov 14 03:35:27.567: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc004965e68, {0x75d77c5?, 0xc004783f80?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75d77c5?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.3() test/e2e/autoscaling/horizontal_pod_autoscaling.go:56 +0x88 �[1mSTEP:�[0m Removing consuming RC test-deployment �[38;5;243m11/14/22 03:35:27.606�[0m Nov 14 03:35:27.606: INFO: RC test-deployment: stopping metric consumer Nov 14 03:35:27.606: INFO: RC test-deployment: stopping CPU consumer Nov 14 03:35:27.606: INFO: RC test-deployment: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-7009, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:35:37.607�[0m Nov 14 03:35:37.737: INFO: Deleting Deployment.apps test-deployment took: 47.426348ms Nov 14 03:35:37.838: INFO: Terminating Deployment.apps test-deployment pods took: 101.086522ms �[1mSTEP:�[0m deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-7009, will wait for the garbage collector to delete the pods �[38;5;243m11/14/22 03:35:40.194�[0m Nov 14 03:35:40.314: INFO: Deleting ReplicationController test-deployment-ctrl took: 37.227091ms Nov 14 03:35:40.415: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.885663ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 03:35:42.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m11/14/22 03:35:42.107�[0m �[1mSTEP:�[0m Collecting events from namespace "horizontal-pod-autoscaling-7009". �[38;5;243m11/14/22 03:35:42.107�[0m �[1mSTEP:�[0m Found 21 events. 
STEP: Removing consuming RC test-deployment 11/14/22 03:35:27.606 Nov 14 03:35:27.606: INFO: RC test-deployment: stopping metric consumer Nov 14 03:35:27.606: INFO: RC test-deployment: stopping CPU consumer Nov 14 03:35:27.606: INFO: RC test-deployment: stopping mem consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-7009, will wait for the garbage collector to delete the pods 11/14/22 03:35:37.607 Nov 14 03:35:37.737: INFO: Deleting Deployment.apps test-deployment took: 47.426348ms Nov 14 03:35:37.838: INFO: Terminating Deployment.apps test-deployment pods took: 101.086522ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-7009, will wait for the garbage collector to delete the pods 11/14/22 03:35:40.194 Nov 14 03:35:40.314: INFO: Deleting ReplicationController test-deployment-ctrl took: 37.227091ms Nov 14 03:35:40.415: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.885663ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 03:35:42.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/14/22 03:35:42.107 STEP: Collecting events from namespace "horizontal-pod-autoscaling-7009". 11/14/22 03:35:42.107 STEP: Found 21 events. 11/14/22 03:35:42.14 Nov 14 03:35:42.140: INFO: At 2022-11-14 03:19:42 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 1 Nov 14 03:35:42.140: INFO: At 2022-11-14 03:19:42 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-bzjlx Nov 14 03:35:42.140: INFO: At 2022-11-14 03:19:42 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7009/test-deployment-54fb67b787-bzjlx to capz-conf-bpf2r Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:04 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:04 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Created: Created container test-deployment Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:06 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Started: Started container test-deployment Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:12 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-nq2l4 Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:12 +0000 UTC - event for test-deployment-ctrl-nq2l4: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7009/test-deployment-ctrl-nq2l4 to capz-conf-sq8nr Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:14 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Created: Created container test-deployment-ctrl Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:14 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:16 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Started: Started container test-deployment-ctrl Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource above target Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 2 from 1 Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-zfj6h Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7009/test-deployment-54fb67b787-zfj6h to capz-conf-sq8nr Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:44 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Created: Created container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:44 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:45 +0000 UTC - event for 
test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Started: Started container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:35:37 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Killing: Stopping container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:35:37 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:35:40 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment-ctrl Nov 14 03:35:42.172: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 03:35:42.172: INFO: Nov 14 03:35:42.205: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.237: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 21100 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 03:35:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 03:35:42.238: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.270: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.322: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:35:42.322: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 03:35:42.322: INFO: Init container install-cni ready: true, restart count 0 Nov 14 03:35:42.322: INFO: Container calico-node ready: true, restart count 0 Nov 14 03:35:42.322: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container coredns ready: true, restart count 0 Nov 14 03:35:42.322: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container metrics-server ready: true, restart count 0 Nov 14 03:35:42.322: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 03:35:42.322: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 03:35:42.322: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 03:35:42.322: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container coredns ready: true, restart count 0 Nov 14 03:35:42.322: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container etcd ready: true, restart count 0 Nov 14 03:35:42.322: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 03:35:42.479: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.479: INFO: Logging node info for node capz-conf-bpf2r Nov 14 03:35:42.511: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 20941 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-14 03:02:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-14 03:33:48 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: 
{{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 03:35:42.512: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 03:35:42.543: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 03:35:42.594: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.594: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:35:42.594: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 03:35:42.594: INFO: Init container install-cni ready: true, restart count 0 Nov 14 03:35:42.594: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:35:42.594: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:35:42.594: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.594: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:35:42.594: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.594: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:35:42.751: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 03:35:42.751: INFO: Logging node info for node capz-conf-sq8nr Nov 14 03:35:42.783: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 20911 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-14 03:02:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-14 03:33:30 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 03:35:42.784: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 03:35:42.815: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 03:35:42.865: INFO: 
calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC (1+2 container statuses recorded) Nov 14 03:35:42.865: INFO: Init container install-cni ready: true, restart count 0 Nov 14 03:35:42.865: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:35:42.865: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:35:42.865: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.865: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:35:42.865: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.865: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:35:42.865: INFO: containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.865: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:35:43.041: INFO: Latency metrics for node capz-conf-sq8nr [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-7009" for this suite. 11/14/22 03:35:43.042 ------------------------------ • [FAILED] [961.192 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:55
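The spec title repeated above scales on CPU aggregated by AverageValue rather than Utilization. Roughly, such a spec drives an autoscaling/v2 HorizontalPodAutoscaler shaped like the sketch below; the API types are real, but the min/max counts, the 100m target, and the object names are illustrative assumptions, not values read from the e2e framework:

// Sketch only: an HPA with a CPU target aggregated by AverageValue.
package sketch

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func averageValueCPUHPA(namespace string) *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment", Namespace: namespace},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			// Scale the Deployment created earlier in the spec.
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "test-deployment",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					// AverageValue: the replica count is chosen so that total CPU
					// usage divided by the number of pods approaches this quantity.
					Target: autoscalingv2.MetricTarget{
						Type:         autoscalingv2.AverageValueMetricType,
						AverageValue: resource.NewMilliQuantity(100, resource.DecimalSI),
					},
				},
			}},
		},
	}
}

With a 100m target like the one assumed here, a sustained 250 millicores of consumption works out to ceil(250/100) = 3 replicas; the events collected above show the autoscaler rescaling from 1 to 2 and never reaching 3 before the 15-minute wait expired. The captured GinkgoWriter output that follows repeats the per-spec log already shown above.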
I1114 03:19:52.258423 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1114 03:20:02.259647 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1114 03:20:12.260094 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/14/22 03:20:12.26 STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-7009 11/14/22 03:20:12.316 I1114 03:20:12.351297 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-7009, replica count: 1 I1114 03:20:22.402488 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 03:20:27.403: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 14 03:20:27.434: INFO: RC test-deployment: consume 250 millicores in total Nov 14 03:20:27.434: INFO: RC test-deployment: setting consumption to 250 millicores in total Nov 14 03:20:27.434: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:20:27.434: INFO: RC test-deployment: consume 0 MB in total Nov 14 03:20:27.434: INFO: RC test-deployment: consume custom metric 0 in total Nov 14 03:20:27.434: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:20:27.434: INFO: RC test-deployment: disabling mem consumption Nov 14 03:20:27.434: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 14 03:20:27.502: INFO: waiting for 3 replicas (current: 1) Nov 14 03:20:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:20:57.511: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:20:57.511: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:21:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:21:27.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:21:27.557: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:21:27.557: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:21:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:21:57.601: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:21:57.601: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:22:07.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:22:27.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:22:27.652: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:22:27.652: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443
/api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:22:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:22:57.706: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:22:57.706: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:23:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:23:27.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:23:27.753: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:23:27.753: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:23:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:23:57.796: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:23:57.797: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:24:07.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:24:27.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:24:27.846: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:24:27.846: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:24:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:24:57.887: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:24:57.888: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:25:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:25:27.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:25:27.934: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:25:27.935: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:25:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:25:57.992: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:25:57.992: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:26:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:26:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:26:28.044: INFO: RC test-deployment: sending request to consume 250 
millicores Nov 14 03:26:28.044: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:26:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:26:58.104: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:26:58.104: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:27:07.539: INFO: waiting for 3 replicas (current: 2) Nov 14 03:27:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:27:28.154: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:27:28.154: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:27:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:27:58.209: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:27:58.209: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:28:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:28:27.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:28:28.257: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:28:28.257: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:28:47.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:28:58.302: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:28:58.302: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:29:07.538: INFO: waiting for 3 replicas (current: 2) Nov 14 03:29:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:29:28.340: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:29:28.341: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:29:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:29:58.382: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:29:58.382: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:30:07.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:30:27.538: INFO: 
waiting for 3 replicas (current: 2) Nov 14 03:30:28.422: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:30:28.422: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:30:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:30:58.462: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:30:58.462: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:31:07.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:31:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:31:28.505: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:31:28.505: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:31:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:31:58.547: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:31:58.547: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:32:07.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:32:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:32:28.596: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:32:28.596: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:32:47.537: INFO: waiting for 3 replicas (current: 2) Nov 14 03:32:58.637: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:32:58.639: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:33:07.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:33:27.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:33:28.679: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:33:28.679: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:33:47.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:33:58.719: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:33:58.719: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:34:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:34:27.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:34:28.773: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:34:28.773: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:34:47.536: INFO: waiting for 3 replicas (current: 2) Nov 14 03:34:58.813: INFO: RC test-deployment: sending request to consume 250 millicores Nov 14 03:34:58.814: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7009/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 14 03:35:07.535: INFO: waiting for 3 replicas (current: 2) Nov 14 03:35:27.534: INFO: waiting for 3 replicas (current: 2) Nov 14 03:35:27.566: INFO: waiting for 3 replicas (current: 2) Nov 14 03:35:27.566: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 14 03:35:27.567: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc004965e68, {0x75d77c5?, 0xc004783f80?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75d77c5?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, {0x75abb3b, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.3() test/e2e/autoscaling/horizontal_pod_autoscaling.go:56 +0x88 STEP: Removing consuming RC test-deployment 11/14/22 03:35:27.606 Nov 14 03:35:27.606: INFO: RC test-deployment: stopping metric consumer Nov 14 03:35:27.606: INFO: RC test-deployment: stopping CPU consumer Nov 14 03:35:27.606: INFO: RC test-deployment: stopping mem consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-7009, will wait for the garbage collector to delete the pods 11/14/22 03:35:37.607 Nov 14 03:35:37.737: INFO: Deleting Deployment.apps test-deployment took: 47.426348ms Nov 14 03:35:37.838: INFO: Terminating Deployment.apps test-deployment pods took: 101.086522ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-7009, will wait for the garbage collector to delete the pods 11/14/22 03:35:40.194 Nov 14 03:35:40.314: INFO: Deleting ReplicationController test-deployment-ctrl took: 37.227091ms Nov 14 03:35:40.415: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.885663ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 14 03:35:42.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/14/22 03:35:42.107 STEP: Collecting events from namespace "horizontal-pod-autoscaling-7009". 11/14/22 03:35:42.107 STEP: Found 21 events. 11/14/22 03:35:42.14
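The events collected below are the clearest signal of the stall: the horizontal-pod-autoscaler records a single SuccessfulRescale to 2 replicas at 03:20:42 and never reaches the expected 3. The same events can be pulled with a small client-go sketch like the following; the kubeconfig path and namespace are taken from this log, everything else is illustrative.

// Hypothetical diagnostic sketch (not part of the captured test output).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the "Collecting events from namespace" step for the failed spec.
	events, err := cs.CoreV1().Events("horizontal-pod-autoscaling-7009").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// e.g. "horizontal-pod-autoscaler SuccessfulRescale: New size: 2; reason: cpu resource above target"
		fmt.Printf("%s %s %s: %s\n", e.LastTimestamp.Format("15:04:05"), e.Source.Component, e.Reason, e.Message)
	}
}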
Nov 14 03:35:42.140: INFO: At 2022-11-14 03:19:42 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 1 Nov 14 03:35:42.140: INFO: At 2022-11-14 03:19:42 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-bzjlx Nov 14 03:35:42.140: INFO: At 2022-11-14 03:19:42 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7009/test-deployment-54fb67b787-bzjlx to capz-conf-bpf2r Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:04 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:04 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Created: Created container test-deployment Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:06 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Started: Started container test-deployment Nov 14 03:35:42.140: INFO: At 2022-11-14 03:20:12 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-nq2l4 Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:12 +0000 UTC - event for test-deployment-ctrl-nq2l4: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7009/test-deployment-ctrl-nq2l4 to capz-conf-sq8nr Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:14 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Created: Created container test-deployment-ctrl Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:14 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:16 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Started: Started container test-deployment-ctrl Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource above target Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 2 from 1 Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-zfj6h Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:42 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7009/test-deployment-54fb67b787-zfj6h to capz-conf-sq8nr Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:44 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Created: Created container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:44 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 14 03:35:42.141: INFO: At 2022-11-14 03:20:45 +0000 UTC - event for
test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Started: Started container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:35:37 +0000 UTC - event for test-deployment-54fb67b787-bzjlx: {kubelet capz-conf-bpf2r} Killing: Stopping container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:35:37 +0000 UTC - event for test-deployment-54fb67b787-zfj6h: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment Nov 14 03:35:42.141: INFO: At 2022-11-14 03:35:40 +0000 UTC - event for test-deployment-ctrl-nq2l4: {kubelet capz-conf-sq8nr} Killing: Stopping container test-deployment-ctrl Nov 14 03:35:42.172: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 03:35:42.172: INFO: Nov 14 03:35:42.205: INFO: Logging node info for node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.237: INFO: Node Info: &Node{ObjectMeta:{capz-conf-5alf7c-control-plane-hknpt a75ceb7e-c32f-458d-b53e-3b6c4a58b600 21100 0 2022-11-14 01:06:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-5alf7c-control-plane-hknpt kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-control-plane-xgnl5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-5alf7c-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.133.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-14 01:06:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-14 01:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-14 01:06:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-14 01:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-14 01:07:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-14 03:35:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-5alf7c-control-plane-hknpt,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-14 01:07:09 +0000 UTC,LastTransitionTime:2022-11-14 01:07:09 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:06:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 03:35:32 +0000 UTC,LastTransitionTime:2022-11-14 01:07:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-5alf7c-control-plane-hknpt,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ddd5ba1d7ea4f438c89ec4460eb4485,SystemUUID:9c65c2f7-ac82-7844-bea4-259d3ca85e49,BootID:e90a54a7-31c1-4896-bf10-fccff5507cc5,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-apiserver:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:135156176,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-controller-manager:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:124986169,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-proxy:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler-amd64:v1.26.0-beta.0.65_8e48df13531802 registry.k8s.io/kube-scheduler:v1.26.0-beta.0.65_8e48df13531802],SizeBytes:57656120,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 03:35:42.238: INFO: Logging kubelet events for node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.270: INFO: Logging pods the kubelet thinks is on node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.322: INFO: kube-proxy-nvvcp started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:35:42.322: INFO: calico-node-jwd52 started at 2022-11-14 01:06:48 +0000 UTC (2+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 14 03:35:42.322: INFO: Init container install-cni ready: true, restart count 0 Nov 14 03:35:42.322: INFO: Container calico-node ready: true, restart count 0 Nov 14 03:35:42.322: INFO: coredns-787d4945fb-qs9pc started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container coredns ready: true, restart count 0 Nov 14 03:35:42.322: INFO: metrics-server-c9574f845-p9ptg started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container metrics-server ready: true, restart count 0 Nov 14 03:35:42.322: INFO: kube-apiserver-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 03:35:42.322: INFO: kube-scheduler-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 03:35:42.322: INFO: calico-kube-controllers-657b584867-65vn5 started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 14 03:35:42.322: INFO: coredns-787d4945fb-dfwrp started at 2022-11-14 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container coredns ready: true, restart count 0 Nov 14 03:35:42.322: INFO: etcd-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:31 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container etcd ready: true, restart count 0 Nov 14 03:35:42.322: INFO: kube-controller-manager-capz-conf-5alf7c-control-plane-hknpt started at 2022-11-14 01:06:30 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.322: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 03:35:42.479: INFO: Latency metrics for node capz-conf-5alf7c-control-plane-hknpt Nov 14 03:35:42.479: INFO: Logging node info for node capz-conf-bpf2r Nov 14 03:35:42.511: INFO: Node Info: &Node{ObjectMeta:{capz-conf-bpf2r c45cb394-b969-49da-b171-8e075ea29d20 20941 0 2022-11-14 01:08:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-bpf2r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-lr6hr cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.114.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c9:39:af volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-14 03:02:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-14 03:33:48 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-bpf2r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: 
{{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:08:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 03:33:48 +0000 UTC,LastTransitionTime:2022-11-14 01:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-bpf2r,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-bpf2r,SystemUUID:21083AEB-D819-4573-9CD1-AA772F09A374,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 03:35:42.512: INFO: Logging kubelet events for node capz-conf-bpf2r Nov 14 03:35:42.543: INFO: Logging pods the kubelet thinks is on node capz-conf-bpf2r Nov 14 03:35:42.594: INFO: containerd-logger-bpt69 started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.594: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:35:42.594: INFO: calico-node-windows-xk6bd started at 2022-11-14 01:08:57 +0000 UTC (1+2 container statuses recorded) Nov 14 03:35:42.594: INFO: Init container install-cni ready: true, restart count 0 Nov 14 03:35:42.594: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:35:42.594: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:35:42.594: INFO: csi-proxy-76x9p started at 2022-11-14 01:09:18 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.594: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:35:42.594: INFO: kube-proxy-windows-nz2rt started at 2022-11-14 01:08:57 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.594: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:35:42.751: INFO: Latency metrics for node capz-conf-bpf2r Nov 14 03:35:42.751: INFO: Logging node info for node capz-conf-sq8nr Nov 14 03:35:42.783: INFO: Node Info: &Node{ObjectMeta:{capz-conf-sq8nr 51b52b43-1941-43af-b740-46bccdd021dd 20911 0 2022-11-14 01:08:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-sq8nr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-5alf7c cluster.x-k8s.io/cluster-namespace:capz-conf-5alf7c cluster.x-k8s.io/machine:capz-conf-5alf7c-md-win-5c98d6f77b-pnhpc cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-5alf7c-md-win-5c98d6f77b kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.166.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:39:f9:57 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-14 01:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-14 01:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-14 01:09:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-14 01:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-14 03:02:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-14 03:33:30 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-5alf7c/providers/Microsoft.Compute/virtualMachines/capz-conf-sq8nr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:08:50 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-14 03:33:30 +0000 UTC,LastTransitionTime:2022-11-14 01:09:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-sq8nr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-sq8nr,SystemUUID:9699376C-B5F7-4F5B-B48F-D84D2BD16580,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-beta.0.65+8e48df13531802,KubeProxyVersion:v1.26.0-beta.0.65+8e48df13531802,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-beta.0.65_8e48df13531802-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 03:35:42.784: INFO: Logging kubelet events for node capz-conf-sq8nr Nov 14 03:35:42.815: INFO: Logging pods the kubelet thinks is on node capz-conf-sq8nr Nov 14 03:35:42.865: INFO: 
calico-node-windows-w6hn2 started at 2022-11-14 01:08:50 +0000 UTC (1+2 container statuses recorded) Nov 14 03:35:42.865: INFO: Init container install-cni ready: true, restart count 0 Nov 14 03:35:42.865: INFO: Container calico-node-felix ready: true, restart count 1 Nov 14 03:35:42.865: INFO: Container calico-node-startup ready: true, restart count 0 Nov 14 03:35:42.865: INFO: kube-proxy-windows-lldgb started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.865: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 03:35:42.865: INFO: csi-proxy-fbwsw started at 2022-11-14 01:09:15 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.865: INFO: Container csi-proxy ready: true, restart count 0 Nov 14 03:35:42.865: INFO: containerd-logger-bf8mz started at 2022-11-14 01:08:50 +0000 UTC (0+1 container statuses recorded) Nov 14 03:35:42.865: INFO: Container containerd-logger ready: true, restart count 0 Nov 14 03:35:43.041: INFO: Latency metrics for node capz-conf-sq8nr [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-7009" for this suite. 11/14/22 03:35:43.042 << End Captured GinkgoWriter Output Nov 14 03:35:27.567: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition In [It] at: test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc004965e68, {0x75d77c5?, 0xc004783f80?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, 0xc000bece10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75d77c5?, 0x62ae505?}, {{0x75ac8f6, 0x4}, {0x75b5b16, 0x7}, {0x75bdfe5, 0xa}}, {0x75abb3b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.3() test/e2e/autoscaling/horizontal_pod_autoscaling.go:56 +0x88
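The failure itself is a plain timeout: the framework polls the scale target until it reports 3 ready replicas and gives up after 15 minutes with the deployment stuck at 2. A minimal client-go sketch of an equivalent wait loop is below, assuming the same kubeconfig, namespace, and deployment name that appear in this log; the 10-second poll interval is an assumption.

// Hypothetical reproduction sketch (not part of the captured test output).
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name, want := "horizontal-pod-autoscaling-7009", "test-deployment", int32(3)

	// Poll every 10s, give up after 15m -- the same budget the e2e run used above.
	err = wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas >= want, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}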
------------------------------ SSSSSSSS ------------------------------ [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:138 [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/14/22 03:35:43.081 Nov 14 03:35:43.081: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/14/22 03:35:43.083 STEP: Waiting for a default service account to be provisioned in namespace 11/14/22 03:35:43.186 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/14/22 03:35:43.246 [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] shouldn't scale up test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:138 STEP: setting up resource consumer and HPA 11/14/22 03:35:43.307 Nov 14 03:35:43.307: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas 11/14/22 03:35:43.308 STEP: Creating deployment consumer in namespace horizontal-pod-autoscaling-4263 11/14/22 03:35:43.354 I1114 03:35:43.390574 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-4263, replica count: 1 I1114 03:35:53.441452 13 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/14/22 03:35:53.441 STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-4263 11/14/22 03:35:53.49 I1114 03:35:53.527305 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-4263, replica count: 1 I1114 03:36:03.578348 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 14 03:36:08.578: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 14 03:36:08.610: INFO: RC consumer: consume 110 millicores in total Nov 14 03:36:08.610: INFO: RC consumer: setting consumption to 110 millicores in total Nov 14 03:36:08.610: INFO: RC consumer: sending request to consume 110 millicores Nov 14 03:36:08.610: INFO: RC consumer: consume 0 MB in total Nov 14 03:36:08.611: INFO: RC consumer: disabling mem consumption Nov 14 03:36:08.611: INFO: RC consumer: consume custom metric 0 in total
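The ConsumeCPU URLs logged below are requests the resource-consumer helper sends to its consumer-ctrl service through the API server's service proxy; the query parameters (durationSec, millicores, requestSizeMillicores) match what the log prints. One way to issue the same request with client-go is sketched here; treating it as a plain POST to the proxy subresource is an assumption about how to drive the consumer outside the framework.

// Hypothetical reproduction sketch (not part of the captured test output).
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// POST /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU
	res := cs.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-4263").
		Resource("services").
		Name("consumer-ctrl").
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("durationSec", "30").
		Param("millicores", "110").
		Param("requestSizeMillicores", "100").
		Do(context.TODO())
	if err := res.Error(); err != nil {
		panic(err)
	}
	fmt.Println("ConsumeCPU request sent")
}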
Nov 14 03:36:08.612: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: trying to trigger scale up 11/14/22 03:36:08.647
Nov 14 03:36:08.647: INFO: RC consumer: consume 880 millicores in total
Nov 14 03:36:08.675: INFO: RC consumer: setting consumption to 880 millicores in total
Nov 14 03:36:08.707: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:36:08.738: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Nov 14 03:36:18.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:36:18.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Nov 14 03:36:28.771: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:36:28.802: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002c94e80}
Nov 14 03:36:38.676: INFO: RC consumer: sending request to consume 880 millicores
Nov 14 03:36:38.676: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 14 03:36:38.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:36:38.802: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004a2f180}
Nov 14 03:36:48.771: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:36:48.808: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004a2f480}
Nov 14 03:36:58.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:36:58.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a91790}
Nov 14 03:37:08.753: INFO: RC consumer: sending request to consume 880 millicores
Nov 14 03:37:08.753: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 14 03:37:08.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:37:08.801: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a900c0}
Nov 14 03:37:18.771: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:37:18.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00593a6c0}
Nov 14 03:37:28.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:37:28.804: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a901a0}
Nov 14 03:37:38.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:37:38.796: INFO: RC consumer: sending request to consume 880 millicores
Nov 14 03:37:38.797: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 14 03:37:38.804: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a905b0}
Nov 14 03:37:48.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:37:48.802: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00593aa20}
Nov 14 03:37:58.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:37:58.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00593acc0}
Nov 14 03:38:08.771: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:38:08.802: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a90b80}
Nov 14 03:38:08.841: INFO: RC consumer: sending request to consume 880 millicores
Nov 14 03:38:08.841: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 14 03:38:18.771: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:38:18.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00593afd0}
Nov 14 03:38:28.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:38:28.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00593b2c0}
Nov 14 03:38:38.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:38:38.801: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a6c340}
Nov 14 03:38:38.888: INFO: RC consumer: sending request to consume 880 millicores
Nov 14 03:38:38.888: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 14 03:38:48.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:38:48.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a91070}
Nov 14 03:38:58.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:38:58.801: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a913b0}
Nov 14 03:39:08.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:39:08.801: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a6c280}
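The repeated "HPA status:" entries are the test polling the HorizontalPodAutoscaler object and printing its autoscaling/v1 status; the hex values are pointer addresses of the CurrentCPUUtilizationPercentage field, not utilization numbers. A minimal sketch of that kind of poll with client-go follows; the kubeconfig path comes from the log, while the HPA name "consumer" and the poll cadence are illustrative assumptions rather than values confirmed by the log.

```go
// Hedged sketch: polling an HPA's autoscaling/v1 status, similar in spirit to
// the "HPA status:" log lines above. Name "consumer" is an assumption.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "horizontal-pod-autoscaling-4263"
	for i := 0; i < 5; i++ {
		hpa, err := cs.AutoscalingV1().HorizontalPodAutoscalers(ns).Get(context.TODO(), "consumer", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same fields the test prints; the CPU field is a *int32 and may be nil
		// until metrics are available, exactly as seen early in the log.
		fmt.Printf("HPA status: current=%d desired=%d cpu=%v\n",
			hpa.Status.CurrentReplicas, hpa.Status.DesiredReplicas, hpa.Status.CurrentCPUUtilizationPercentage)
		time.Sleep(10 * time.Second)
	}
}
```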
Nov 14 03:39:08.932: INFO: RC consumer: sending request to consume 880 millicores
Nov 14 03:39:08.932: INFO: ConsumeCPU URL: {https capz-conf-5alf7c-b921e503.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4263/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 14 03:39:18.773: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:39:18.804: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00593a320}
Nov 14 03:39:28.772: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:39:28.803: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a900a0}
Nov 14 03:39:38.770: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:39:38.801: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a6c620}
Nov 14 03:39:38.832: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 14 03:39:38.863: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003a90360}
Nov 14 03:39:38.863: INFO: Number of replicas was stable over 3m30s
STEP: verifying time waited for a scale up 11/14/22 03:39:38.863
Nov 14 03:39:38.864: INFO: time waited for scale up: 3m30.188121806s
STEP: verifying number of replicas 11/14/22 03:39:38.864
STEP: Removing consuming RC consumer 11/14/22 03:39:38.933
Nov 14 03:39:38.934: INFO: RC consumer: stopping metric consumer
Nov 14 03:39:38.934: INFO: RC consumer: stopping CPU consumer
Nov 14 03:39:38.934: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-4263, will wait for the garbage collector to delete the pods 11/14/22 03:39:48.935
Nov 14 03:39:49.055: INFO: Deleting Deployment.apps consumer took: 37.328071ms
Nov 14 03:39:49.156: INFO: Terminating Deployment.apps consumer pods took: 101.113904ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-4263, will wait for the garbage collector to delete the pods 11/14/22 03:39:51.216
Nov 14 03:39:51.334: INFO: Deleting ReplicationController consumer-ctrl took: 35.196895ms
Nov 14 03:39:51.434: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.781542ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/node/init/init.go:32
Nov 14 03:39:52.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4263" for this suite. 11/14/22 03:39:52.943
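This "with autoscaling disabled shouldn't scale up" spec passes precisely because, even under the sustained 880-millicore load, the replica count stays pinned at 1 for the whole 3m30s observation window. The mechanism being exercised is the HPA's configurable scaling behavior; a hedged sketch of the kind of autoscaling/v2 object such a test relies on is below. The object name, replica bounds, and CPU target here are illustrative assumptions, not values taken from the test code.

```go
// Hedged sketch: an autoscaling/v2 HPA whose scale-up is switched off via
// spec.behavior.scaleUp.selectPolicy=Disabled. Names and numbers are illustrative.
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(1)
	targetCPU := int32(20)
	disabled := autoscalingv2.DisabledPolicySelect

	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer", Namespace: "horizontal-pod-autoscaling-4263"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "consumer",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 3,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
			// With SelectPolicy=Disabled the controller never picks a scale-up
			// policy, which is what keeps the replica count at 1 in the log above.
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{SelectPolicy: &disabled},
			},
		},
	}
	fmt.Printf("%s/%s: scaleUp selectPolicy=%s\n", hpa.Namespace, hpa.Name, *hpa.Spec.Behavior.ScaleUp.SelectPolicy)
}
```

The failed spec earlier in this output is the mirror image: there scale-up is enabled and the test times out after 15m waiting for 3 replicas, which is why it is the one conformance failure reported for this run.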
------------------------------
• [SLOW TEST] [249.901 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
  with autoscaling disabled
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137
    shouldn't scale up
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:138

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/14/22 03:35:43.081
Nov 14 03:35:43.081: INFO: >>> kubeConfig: