PR       | claudiubelu: Refactored kubelet's kuberuntime_sandbox
Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 1h5m
Revision | 5e605d81d57e2309b3c08f821c9dc41372f802c7
Refs     | 114185
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
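The escaped --ginkgo.focus pattern selects a single spec by the text Ginkgo composes from its container and leaf nodes, here the "Conformance Tests" Describe and the "conformance-tests" It that appear in the timeline below. A minimal Ginkgo v2 sketch (not the actual capz suite; the "capz-e2e" suite name is only inferred from the focus regex) of how such a spec is declared:

package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
)

// Suite bootstrap; the suite description "capz-e2e" is an assumption taken from the focus regex.
func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(Fail)
	RunSpecs(t, "capz-e2e")
}

// Container text plus leaf text name the spec "Conformance Tests conformance-tests",
// which is what the focus pattern above targets.
var _ = Describe("Conformance Tests", func() {
	It("conformance-tests", func() {
		// The real spec (test/e2e/conformance_test.go:98) provisions a workload cluster
		// and runs the upstream e2e.test conformance binary against it.
	})
})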
[FAILED] Unexpected error:
    <*errors.withStack | 0xc000f9b470>: {
        error: <*errors.withMessage | 0xc002656300>{
            cause: <*errors.errorString | 0xc00021f130>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532
(from junit.e2e_suite.1.xml)
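The <*errors.withStack | …> / <*errors.withMessage | …> nesting in the dump is the shape github.com/pkg/errors produces when a cause is wrapped with a message. A minimal sketch (assuming pkg/errors; not taken from the capz sources) that reproduces the printed error text:

package main

import (
	stderrors "errors"
	"fmt"

	pkgerrors "github.com/pkg/errors"
)

func main() {
	// The cause in the dump is a plain *errors.errorString from the standard library.
	cause := stderrors.New("error container run failed with exit code 1")

	// pkg/errors.Wrap adds the *errors.withMessage and *errors.withStack layers seen above.
	err := pkgerrors.Wrap(cause, "Unable to run conformance tests")

	// Prints: Unable to run conformance tests: error container run failed with exit code 1
	fmt.Println(err)
}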
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:54 @ 03/20/23 20:33:22.243
INFO: Cluster name is capz-conf-1plfqp
STEP: Creating namespace "capz-conf-1plfqp" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:33:22.243
Mar 20 20:33:22.243: INFO: starting to create namespace for hosting the "capz-conf-1plfqp" test spec
INFO: Creating namespace capz-conf-1plfqp
INFO: Creating event watcher for namespace "capz-conf-1plfqp"
< Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:54 @ 03/20/23 20:33:22.333 (90ms)
> Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98 @ 03/20/23 20:33:22.333
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 03/20/23 20:33:22.333
conformance-tests
  Name                        | N | Min      | Median   | Mean     | StdDev | Max
  ================================================================================
  cluster creation [duration] | 1 | 9m9.638s | 9m9.638s | 9m9.638s | 0s     | 9m9.638s
INFO: Creating the workload cluster with name "capz-conf-1plfqp" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-beta.0.25+15894cfc85cab6, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-1plfqp --infrastructure (default) --kubernetes-version v1.27.0-beta.0.25+15894cfc85cab6 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-presubmit-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/cluster_helpers.go:134 @ 03/20/23 20:33:28.297
INFO: Waiting for control plane to be initialized
STEP: Ensuring KubeadmControlPlane is initialized - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:263 @ 03/20/23 20:35:28.431
STEP: Ensuring API Server is reachable before applying Helm charts - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:269 @ 03/20/23 20:38:38.576
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 03/20/23 20:38:39.012
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 03/20/23 20:38:39.013
Mar 20 20:38:39.065: INFO: getting history for release projectcalico
Mar 20 20:38:39.102: INFO: Release projectcalico does not exist, installing it
Mar 20 20:38:40.020: INFO: creating 1 resource(s)
Mar 20 20:38:40.107: INFO: creating 1 resource(s)
Mar 20 20:38:40.205: INFO: creating 1 resource(s)
Mar 20 20:38:40.286: INFO: creating 1 resource(s)
Mar 20 20:38:40.374: INFO: creating 1 resource(s)
Mar 20 20:38:40.458: INFO: creating 1 resource(s)
Mar 20 20:38:40.587: INFO: creating 1 resource(s)
Mar 20 20:38:40.689: INFO: creating 1 resource(s)
Mar 20 20:38:40.773: INFO: creating 1 resource(s)
Mar 20 20:38:40.852: INFO: creating 1 resource(s)
Mar 20 20:38:40.935: INFO: creating 1 resource(s)
Mar 20 20:38:41.013: INFO: creating 1 resource(s)
Mar 20 20:38:41.088: INFO: creating 1 resource(s)
Mar 20 20:38:41.169:
INFO: creating 1 resource(s) Mar 20 20:38:41.247: INFO: creating 1 resource(s) Mar 20 20:38:41.337: INFO: creating 1 resource(s) Mar 20 20:38:41.438: INFO: creating 1 resource(s) Mar 20 20:38:41.524: INFO: creating 1 resource(s) Mar 20 20:38:41.626: INFO: creating 1 resource(s) Mar 20 20:38:41.767: INFO: creating 1 resource(s) Mar 20 20:38:42.092: INFO: creating 1 resource(s) Mar 20 20:38:42.161: INFO: Clearing discovery cache Mar 20 20:38:42.161: INFO: beginning wait for 21 resources with timeout of 1m0s Mar 20 20:38:44.561: INFO: creating 1 resource(s) Mar 20 20:38:44.937: INFO: creating 6 resource(s) Mar 20 20:38:45.471: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:58 @ 03/20/23 20:38:45.831 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:38:46.086 Mar 20 20:38:46.086: INFO: starting to wait for deployment to become available Mar 20 20:38:56.157: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.071684395s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:64 @ 03/20/23 20:38:56.157 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:38:56.482 Mar 20 20:38:56.482: INFO: starting to wait for deployment to become available Mar 20 20:39:47.524: INFO: Deployment calico-system/calico-kube-controllers is now available, took 51.041524561s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:39:47.827 Mar 20 20:39:47.827: INFO: starting to wait for deployment to become available Mar 20 20:39:47.860: INFO: Deployment calico-system/calico-typha is now available, took 33.387198ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:69 @ 03/20/23 20:39:47.86 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:39:58.148 Mar 20 20:39:58.148: INFO: starting to wait for deployment to become available Mar 20 20:40:08.216: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.068038775s INFO: Waiting for the first control plane machine managed by capz-conf-1plfqp/capz-conf-1plfqp-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/controlplane_helpers.go:132 @ 03/20/23 20:40:08.237 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:77 @ 03/20/23 20:40:08.243 Mar 20 20:40:08.292: INFO: getting history for release azuredisk-csi-driver-oot Mar 20 20:40:08.326: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Mar 20 20:40:10.970: INFO: creating 1 resource(s) Mar 20 20:40:11.061: INFO: creating 18 resource(s) Mar 20 20:40:11.394: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:87 
@ 03/20/23 20:40:11.394
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:40:11.556
Mar 20 20:40:11.556: INFO: starting to wait for deployment to become available
Mar 20 20:40:41.698: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.142776802s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-1plfqp/capz-conf-1plfqp-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/controlplane_helpers.go:164 @ 03/20/23 20:40:41.715
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/controlplane_helpers.go:209 @ 03/20/23 20:40:41.725
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/machinedeployment_helpers.go:102 @ 03/20/23 20:40:41.754
STEP: Checking all the machines controlled by capz-conf-1plfqp-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/20/23 20:40:41.763
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/machinedeployment_helpers.go:102 @ 03/20/23 20:40:41.771
STEP: Checking all the machines controlled by capz-conf-1plfqp-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/20/23 20:42:31.919
INFO: Waiting for the machine pools to be provisioned
INFO: Using repo-list '' for version 'v1.27.0-beta.0.25+15894cfc85cab6'
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "-prepull-images=true" "-disable-log-dump=true" "-ginkgo.progress=true" "-ginkgo.timeout=3h" "-ginkgo.trace=true" "-ginkgo.v=true" "-node-os-distro=windows" "-dump-logs-on-failure=true" "-ginkgo.flakeAttempts=0" "-ginkgo.focus=\\[Conformance\\]|\\[NodeConformance\\]|\\[sig-windows\\]|\\[sig-apps\\].CronJob|\\[sig-api-machinery\\].ResourceQuota|\\[sig-scheduling\\].SchedulerPreemption" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Serial\\]|\\[Slow\\]|\\[Excluded:WindowsDocker\\]|Networking.Granular.Checks(.*)node-pod.communication|Guestbook.application.should.create.and.stop.a.working.application|device.plugin.for.Windows|Container.Lifecycle.Hook.when.create.a.pod.with.lifecycle.hook.should.execute(.*)http.hook.properly|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.slow-spec-threshold=120s"] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/20/23 20:42:32.168
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1679344953 - will randomize all specs
Will run 348 of 7207 specs
Running in parallel across 4 processes
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.318 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
Timeline >>
Mar 20 20:42:33.697: INFO: >>> kubeConfig: /tmp/kubeconfig
Mar 20 20:42:33.699: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 20 20:42:33.889: INFO: Condition Ready of node capz-conf-gm7xg is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}]. Failure
Mar 20 20:42:33.889: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0
Mar 20 20:42:33.889: INFO: -> Node capz-conf-gm7xg [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:42:33.889: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0. Need 1 more before starting.
Mar 20 20:43:03.937: INFO: Condition Ready of node capz-conf-gm7xg is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}]. Failure
Mar 20 20:43:03.937: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0
Mar 20 20:43:03.937: INFO: -> Node capz-conf-gm7xg [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:43:03.937: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0. Need 1 more before starting.
Mar 20 20:43:33.937: INFO: Condition Ready of node capz-conf-gm7xg is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}].
Failure
Mar 20 20:43:33.937: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0
Mar 20 20:43:33.937: INFO: -> Node capz-conf-gm7xg [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:43:33.937: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0. Need 1 more before starting.
Mar 20 20:44:03.937: INFO: Condition Ready of node capz-conf-gm7xg is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}]. Failure
Mar 20 20:44:03.937: INFO: Condition Ready of node capz-conf-vvvcd is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-03-20 20:43:59 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:44:01 +0000 UTC}]. Failure
Mar 20 20:44:03.937: INFO: Unschedulable nodes= 2, maximum value for starting tests= 0
Mar 20 20:44:03.937: INFO: -> Node capz-conf-gm7xg [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:44:03.937: INFO: -> Node capz-conf-vvvcd [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule 2023-03-20 20:43:59 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:44:01 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:44:03.937: INFO: ==== node wait: 1 out of 3 nodes are ready, max notReady allowed 0. Need 2 more before starting.
Mar 20 20:44:33.939: INFO: Condition Ready of node capz-conf-gm7xg is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}]. Failure
Mar 20 20:44:33.940: INFO: Condition Ready of node capz-conf-vvvcd is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-03-20 20:43:59 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:44:01 +0000 UTC}]. Failure
Mar 20 20:44:33.940: INFO: Unschedulable nodes= 2, maximum value for starting tests= 0
Mar 20 20:44:33.940: INFO: -> Node capz-conf-gm7xg [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:41:06 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:44:33.940: INFO: -> Node capz-conf-vvvcd [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule 2023-03-20 20:43:59 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-03-20 20:44:01 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]]
Mar 20 20:44:33.940: INFO: ==== node wait: 1 out of 3 nodes are ready, max notReady allowed 0. Need 2 more before starting.
STEP: Collecting events from namespace "kube-system". @ 03/20/23 20:55:03.96
STEP: Found 186 events.
@ 03/20/23 20:55:04.015
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for containerd-logger-ng4wl: { } Scheduled: Successfully assigned kube-system/containerd-logger-ng4wl to capz-conf-gm7xg
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for containerd-logger-xxz7w: { } Scheduled: Successfully assigned kube-system/containerd-logger-xxz7w to capz-conf-vvvcd
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for coredns-5d78c9869d-c58vk: { } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for coredns-5d78c9869d-c58vk: { } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-c58vk to capz-conf-1plfqp-control-plane-2j2gm
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for coredns-5d78c9869d-wh4l9: { } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-wh4l9 to capz-conf-1plfqp-control-plane-2j2gm
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for coredns-5d78c9869d-wh4l9: { } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: { } Scheduled: Successfully assigned kube-system/csi-azuredisk-controller-56db99df6c-sbnn7 to capz-conf-1plfqp-control-plane-2j2gm
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-azuredisk-node-jtlzl: { } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-jtlzl to capz-conf-1plfqp-control-plane-2j2gm
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-azuredisk-node-win-778bd: { } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-778bd to capz-conf-gm7xg
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-azuredisk-node-win-nrh82: { } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-nrh82 to capz-conf-vvvcd
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-proxy-4v7zg: { } Scheduled: Successfully assigned kube-system/csi-proxy-4v7zg to capz-conf-gm7xg
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-proxy-bnsgh: { } Scheduled: Successfully assigned kube-system/csi-proxy-bnsgh to capz-conf-vvvcd
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-windows-527hb: { } Scheduled: Successfully assigned kube-system/kube-proxy-windows-527hb to capz-conf-gm7xg
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-windows-wmp2s: { } Scheduled: Successfully assigned kube-system/kube-proxy-windows-wmp2s to capz-conf-vvvcd
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for metrics-server-6987569d96-kbkwt: { } Scheduled: Successfully assigned kube-system/metrics-server-6987569d96-kbkwt to capz-conf-1plfqp-control-plane-2j2gm
Mar 20 20:55:04.015: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for metrics-server-6987569d96-kbkwt: { } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for etcd-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container etcd Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for etcd-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-apiserver Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-beta.0.25_15894cfc85cab6" already present on machine Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-controller-manager Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-beta.0.25_15894cfc85cab6" already present on machine Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-scheduler Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:39 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-beta.0.25_15894cfc85cab6" already present on machine Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:40 +0000 UTC - event for etcd-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container etcd Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:40 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-apiserver Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:40 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-controller-manager Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:40 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-scheduler Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:51 +0000 UTC - event for kube-controller-manager: {kube-controller-manager } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_87d90ad9-48a4-40e6-89b1-22ec95065c9a became leader Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:52 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_a43d2e3d-ae5a-4af4-8423-c173133e130f became leader Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-5d78c9869d to 2 Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - 
event for coredns-5d78c9869d: {replicaset-controller } SuccessfulCreate: Created pod: coredns-5d78c9869d-wh4l9 Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for coredns-5d78c9869d: {replicaset-controller } SuccessfulCreate: Created pod: coredns-5d78c9869d-c58vk Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for coredns-5d78c9869d-c58vk: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulDelete: Deleted pod: kube-proxy-x9kfz Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-x9kfz Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for kube-proxy-x9kfz: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-x9kfz to capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-controller-manager:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-7gqj4 Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-7gqj4: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-7gqj4 to capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-x9kfz: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedMount: MountVolume.SetUp failed for volume "kube-proxy" : object "kube-system"/"kube-proxy" not registered Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-x9kfz: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-m8dpv" : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:capz-conf-1plfqp-control-plane-2j2gm" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'capz-conf-1plfqp-control-plane-2j2gm' and this object, object "kube-system"/"kube-root-ca.crt" not registered] Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet 
capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-apiserver Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-controller-manager Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-scheduler Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:04 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5" in 3.562353503s (3.562464604s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:05 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-apiserver Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:05 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-apiserver Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:08 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-controller-manager Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:08 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-controller-manager Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:08 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-controller-manager:v1.27.0-beta.0.29_117662b4a973d5" in 3.083614598s (6.546239874s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-scheduler Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5" in 2.140395691s (8.671982443s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-scheduler Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5" in 3.499445571s (11.923340955s including 
waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:21 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Unhealthy: Startup probe failed: HTTP probe failed with statuscode: 500 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:24 +0000 UTC - event for kube-controller-manager: {kube-controller-manager } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_11cc3f7d-b40e-4cbe-be22-ee508e31eb2b became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:26 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_d0286c4b-aa0a-48d9-b282-91d3450fb492 became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:31 +0000 UTC - event for metrics-server: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-6987569d96 to 1 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:31 +0000 UTC - event for metrics-server-6987569d96: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-6987569d96-kbkwt Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9e13897e7fed205b2819620b91a752b5b98b00008e7f1e2aad8184773be3dc43": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "32f577f0bea9664ec11ac0e5b98a62af85a154812095aa16ee7f9349556e49a7": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "da59f6a3840523d29dc136abb059229721874304ef229111992d9d331dfd85cf": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:39 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" in 5.443256687s (6.220329455s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:40 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container metrics-server Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:41 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container metrics-server Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-controller: {deployment-controller } ScalingReplicaSet: Scaled up replica set csi-azuredisk-controller-56db99df6c to 1 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-controller-56db99df6c: {replicaset-controller } SuccessfulCreate: Created pod: csi-azuredisk-controller-56db99df6c-sbnn7 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-jtlzl Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:13 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image 
"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0" in 1.738564957s (1.738657058s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:13 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container liveness-probe Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:13 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:13 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container liveness-probe Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:16 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container csi-provisioner Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:16 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0" in 3.278329676s (4.902677184s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:16 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container csi-provisioner Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:16 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:17 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2" in 791.260057ms (3.888502872s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:17 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:17 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container node-driver-registrar Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:17 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container node-driver-registrar Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:21 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0" in 3.512063943s (4.153404625s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:21 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:21 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container csi-attacher Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:21 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container 
csi-attacher Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:29 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container azuredisk Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:29 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 8.576826767s (11.93296218s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:29 +0000 UTC - event for csi-azuredisk-node-jtlzl: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container azuredisk Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:31 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1" in 2.359007044s (10.77998773s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:31 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container csi-snapshotter Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:32 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container csi-snapshotter Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:32 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container azuredisk Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" in 7.16163065s (7.16163665s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container liveness-probe Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container csi-resizer Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container csi-resizer Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container liveness-probe Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for 
csi-azuredisk-controller-56db99df6c-sbnn7: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container azuredisk Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:39 +0000 UTC - event for external-snapshotter-leader-disk-csi-azure-com: {external-snapshotter-leader-disk.csi.azure.com/capz-conf-1plfqp-control-plane-2j2gm } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:40 +0000 UTC - event for disk-csi-azure-com: {disk.csi.azure.com/1679344840624-8081-disk.csi.azure.com } LeaderElection: 1679344840624-8081-disk-csi-azure-com became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:40 +0000 UTC - event for external-attacher-leader-disk-csi-azure-com: {external-attacher-leader-disk.csi.azure.com/capz-conf-1plfqp-control-plane-2j2gm } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:40 +0000 UTC - event for external-resizer-disk-csi-azure-com: {external-resizer-disk-csi-azure-com/capz-conf-1plfqp-control-plane-2j2gm } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:54 +0000 UTC - event for containerd-logger: {daemonset-controller } SuccessfulCreate: Created pod: containerd-logger-xxz7w Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:54 +0000 UTC - event for kube-proxy-windows: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-windows-wmp2s Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:01 +0000 UTC - event for containerd-logger: {daemonset-controller } SuccessfulCreate: Created pod: containerd-logger-ng4wl Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:01 +0000 UTC - event for kube-proxy-windows: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-windows-527hb Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:15 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:15 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} Pulled: Container image "sigwindowstools/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6-calico-hostprocess" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:16 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} Created: Created container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:16 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} Started: Started container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:16 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} Killing: Stopping container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:20 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 4.4769903s (4.4769903s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:22 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:22 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} Created: Created container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:22 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} Pulled: Container 
image "sigwindowstools/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6-calico-hostprocess" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:22 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} Started: Started container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:23 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} Killing: Stopping container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:27 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Created: Created container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:27 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Started: Started container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:28 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Killing: Stopping container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:31 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 4.2654753s (9.082484s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:32 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 415.267ms (415.267ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:32 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-wmp2s_kube-system(bcd38796-26a8-4f15-9513-2a8ac58d2ba4) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:43 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Created: Created container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:43 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Started: Started container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:44 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Killing: Stopping container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:44 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 470.5145ms (470.5145ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:45 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-527hb_kube-system(00140840-3274-4053-b4b9-49e8d5996de7) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:49 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 595.9949ms (595.9949ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:55 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 607.4595ms (607.4595ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:03 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 525.7424ms (525.7424ms including 
waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:06 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-xxz7w_kube-system(e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:07 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-nrh82 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:07 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-bnsgh Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:08 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:08 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:18 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 498.4167ms (498.4167ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 17.1572849s (17.1572849s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Created: Created container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Started: Started container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:27 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Killing: Stopping container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:27 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:31 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-ng4wl_kube-system(bd28dbc9-32d2-41df-8201-42b78981a1f5) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Created: Created container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 17.127592s (34.2429305s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Started: Started container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:44 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Killing: Stopping container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:48 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:43:05 +0000 UTC - event for 
csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-bnsgh_kube-system(d8246000-ea4b-4f56-a4b8-755b44656004) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:46 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-778bd Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:46 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-4v7zg Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:47 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:47 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:09 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 21.9331601s (21.933656s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:09 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Created: Created container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:10 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Started: Started container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:11 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Killing: Stopping container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:15 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:31 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 22.3031678s (44.1908144s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:32 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Created: Created container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:32 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Started: Started container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:33 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Killing: Stopping container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:37 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:54 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-4v7zg_kube-system(4bfb48ce-a08e-4c4b-8d11-594ea6912696) Mar 20 20:55:04.070: INFO: POD NODE PHASE GRACE CONDITIONS Mar 20 20:55:04.070: INFO: containerd-logger-ng4wl capz-conf-gm7xg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:54:10 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:54:10 +0000 UTC ContainersNotReady containers with 
unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC }] Mar 20 20:55:04.070: INFO: containerd-logger-xxz7w capz-conf-vvvcd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:45 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:45 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:54 +0000 UTC }] Mar 20 20:55:04.070: INFO: coredns-5d78c9869d-c58vk capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC }] Mar 20 20:55:04.070: INFO: coredns-5d78c9869d-wh4l9 capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-azuredisk-controller-56db99df6c-sbnn7 capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-azuredisk-node-jtlzl capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-azuredisk-node-win-778bd capz-conf-gm7xg Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:51:52 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:44:46 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:44:46 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:44:46 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-azuredisk-node-win-nrh82 capz-conf-vvvcd Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:42:07 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:42:07 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:42:07 +0000 UTC ContainersNotReady containers with unready status: 
[liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:42:07 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-proxy-4v7zg capz-conf-gm7xg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:44:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:51:04 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:51:04 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:44:46 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-proxy-bnsgh capz-conf-vvvcd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:42:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:53:19 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:53:19 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:42:07 +0000 UTC }] Mar 20 20:55:04.070: INFO: etcd-capz-conf-1plfqp-control-plane-2j2gm capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC }] Mar 20 20:55:04.070: INFO: kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC }] Mar 20 20:55:04.070: INFO: kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC }] Mar 20 20:55:04.071: INFO: kube-proxy-7gqj4 capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:01 +0000 UTC }] Mar 20 20:55:04.071: INFO: kube-proxy-windows-527hb capz-conf-gm7xg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:02 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:02 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC }] Mar 20 20:55:04.071: INFO: kube-proxy-windows-wmp2s capz-conf-vvvcd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2023-03-20 20:40:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:51:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:51:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:54 +0000 UTC }] Mar 20 20:55:04.071: INFO: kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:38:00 +0000 UTC }] Mar 20 20:55:04.071: INFO: metrics-server-6987569d96-kbkwt capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC }] Mar 20 20:55:04.071: INFO: Mar 20 20:55:04.609: INFO: Logging node info for node capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:04.802: INFO: Node Info: &Node{ObjectMeta:{capz-conf-1plfqp-control-plane-2j2gm ac78c7f9-8101-4bea-a120-e721cedc32ca 4098 0 2023-03-20 20:37:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_B2s beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:canadacentral failure-domain.beta.kubernetes.io/zone:canadacentral-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-1plfqp-control-plane-2j2gm kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_B2s topology.disk.csi.azure.com/zone:canadacentral-1 topology.kubernetes.io/region:canadacentral topology.kubernetes.io/zone:canadacentral-1] map[cluster.x-k8s.io/cluster-name:capz-conf-1plfqp cluster.x-k8s.io/cluster-namespace:capz-conf-1plfqp cluster.x-k8s.io/machine:capz-conf-1plfqp-control-plane-4zbz7 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-1plfqp-control-plane csi.volume.kubernetes.io/nodeid:{"csi.tigera.io":"capz-conf-1plfqp-control-plane-2j2gm","disk.csi.azure.com":"capz-conf-1plfqp-control-plane-2j2gm"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.107.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-20 20:37:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-20 20:37:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2023-03-20 20:38:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-20 20:39:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {calico-node Update v1 2023-03-20 20:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-20 20:51:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-1plfqp/providers/Microsoft.Compute/virtualMachines/capz-conf-1plfqp-control-plane-2j2gm,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4123181056 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4018323456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-20 20:39:30 +0000 UTC,LastTransitionTime:2023-03-20 20:39:30 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:19 +0000 UTC,LastTransitionTime:2023-03-20 20:37:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:19 +0000 UTC,LastTransitionTime:2023-03-20 20:37:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:19 +0000 UTC,LastTransitionTime:2023-03-20 20:37:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-20 20:51:19 +0000 
UTC,LastTransitionTime:2023-03-20 20:39:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-1plfqp-control-plane-2j2gm,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a89f76f138b44ed9dacdd7c86429be3,SystemUUID:17bc8ddc-aaa7-6347-8219-cbe9b82bc273,BootID:71b65451-13da-45e7-b1a0-7ed9b4c2b20f,KernelVersion:5.4.0-1104-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.29+117662b4a973d5,KubeProxyVersion:v1.27.0-beta.0.29+117662b4a973d5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 docker.io/calico/cni:v3.25.0],SizeBytes:87984941,},ContainerImage{Names:[docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 docker.io/calico/node:v3.25.0],SizeBytes:87185935,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:3ef7d954946bd1cf9e5e3564a8d1acf8e5852616f7ae96bcbc5ced8c275483ee mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0],SizeBytes:61391360,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:9ba6483d2f8aa6051cb3a50e42d638fc17a6e4699a6689f054969024b7c12944 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0],SizeBytes:58560473,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:bc317fea7e7bbaff65130d7ac6ea7c96bc15eb1f086374b8c3359f11988ac024 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0],SizeBytes:57948644,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:56451605,},ContainerImage{Names:[docker.io/calico/apiserver@sha256:9819c1b569e60eec4dbab82c1b41cee80fe8af282b25ba2c174b2a00ae555af6 docker.io/calico/apiserver:v3.25.0],SizeBytes:35624155,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0 registry.k8s.io/kube-apiserver:v1.26.2],SizeBytes:35329425,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver@sha256:b1734ba11340234a7dac1b75aed045a355c7ad1414089a24b12a857a70018f71 gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-beta.0.25_15894cfc85cab6],SizeBytes:33206527,},ContainerImage{Names:[capzci.azurecr.io/kube-apiserver@sha256:31870f4e3fc00dc3ee2dacc17e44c6c78d0d1022dcb03eac9678b883c315fd7e capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5],SizeBytes:33205151,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad registry.k8s.io/kube-controller-manager:v1.26.2],SizeBytes:32180749,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3 
docker.io/calico/kube-controllers:v3.25.0],SizeBytes:31271800,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager@sha256:21fecc1471fd46a3569b90ee21bca428830b44e14fd50b6e5ad56d35e1d6fb19 gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-beta.0.25_15894cfc85cab6],SizeBytes:30822153,},ContainerImage{Names:[capzci.azurecr.io/kube-controller-manager@sha256:6ba0482bb5d1c47126a5c617bfde8ff0bdc33777a5cf1c8deea5bcaa266f9826 capzci.azurecr.io/kube-controller-manager:v1.27.0-beta.0.29_117662b4a973d5],SizeBytes:30820801,},ContainerImage{Names:[docker.io/calico/typha@sha256:f7e0557e03f422c8ba5fcf64ef0fac054ee99935b5d101a0a50b5e9b65f6a5c5 docker.io/calico/typha:v3.25.0],SizeBytes:28533187,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2 k8s.gcr.io/metrics-server/metrics-server:v0.6.2],SizeBytes:28135299,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy@sha256:a5657c213363c5f4156cb02f7b5b42116d1229b0a097213299418cf87f90e059 gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6],SizeBytes:23897544,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:ca4673a406a7cba6f771223c62f6f800a14e4ab4491a363de2fe2bf409c39a82 capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5],SizeBytes:23896193,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078 registry.k8s.io/kube-proxy:v1.26.2],SizeBytes:21541935,},ContainerImage{Names:[quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582 quay.io/tigera/operator:v1.29.0],SizeBytes:21108896,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler@sha256:08129a370960a9e8da936ac32228f475c786ec320a710799c06c45e6a6bce152 gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-beta.0.25_15894cfc85cab6],SizeBytes:18078756,},ContainerImage{Names:[capzci.azurecr.io/kube-scheduler@sha256:6af31eb282ccebdd358a3d1f03dcf6e1edc16138cef73691063c4acb25c03b7c capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5],SizeBytes:18077395,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417 registry.k8s.io/kube-scheduler:v1.26.2],SizeBytes:17489559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/calico/node-driver-registrar@sha256:f559ee53078266d2126732303f588b9d4266607088e457ea04286f31727676f7 docker.io/calico/node-driver-registrar:v3.25.0],SizeBytes:11133658,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:10076715,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 
mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:9117963,},ContainerImage{Names:[docker.io/calico/csi@sha256:61a95f3ee79a7e591aff9eff535be73e62d2c3931d07c2ea8a1305f7bea19b31 docker.io/calico/csi:v3.25.0],SizeBytes:9076936,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:01ddd57d428787b3ac689daa685660defe4bd7810069544bd43a9103a7b0a789 docker.io/calico/pod2daemon-flexvol:v3.25.0],SizeBytes:7076045,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 20 20:55:04.802: INFO: Logging kubelet events for node capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:04.997: INFO: Logging pods the kubelet thinks is on node capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:05.266: INFO: coredns-5d78c9869d-c58vk started at 2023-03-20 20:39:19 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container coredns ready: true, restart count 0 Mar 20 20:55:05.266: INFO: csi-azuredisk-node-jtlzl started at 2023-03-20 20:40:11 +0000 UTC (0+3 container statuses recorded) Mar 20 20:55:05.266: INFO: Container azuredisk ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container liveness-probe ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 20 20:55:05.266: INFO: tigera-operator-59c686f986-m7hjf started at 2023-03-20 20:38:45 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container tigera-operator ready: true, restart count 0 Mar 20 20:55:05.266: INFO: kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm started at 2023-03-20 20:38:00 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 20 20:55:05.266: INFO: kube-proxy-7gqj4 started at 2023-03-20 20:38:01 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 20:55:05.266: INFO: metrics-server-6987569d96-kbkwt started at 2023-03-20 20:39:19 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container metrics-server ready: true, restart count 0 Mar 20 20:55:05.266: INFO: csi-node-driver-j9ptp started at 2023-03-20 20:39:20 +0000 UTC (0+2 container statuses recorded) Mar 20 20:55:05.266: INFO: Container calico-csi ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container csi-node-driver-registrar ready: true, restart count 0 Mar 20 20:55:05.266: INFO: calico-apiserver-5467959f9d-8zg79 started at 2023-03-20 20:39:51 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container calico-apiserver ready: true, restart count 0 Mar 20 20:55:05.266: INFO: kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm started at 2023-03-20 20:38:00 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container kube-apiserver ready: true, restart count 0 Mar 20 20:55:05.266: INFO: calico-typha-96fb785dc-c7sr9 started at 2023-03-20 20:38:52 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container calico-typha ready: true, restart count 0 Mar 20 20:55:05.266: INFO: calico-node-bdvzb started at 2023-03-20 20:38:52 +0000 UTC (2+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Init container flexvol-driver ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Init container install-cni ready: 
true, restart count 0 Mar 20 20:55:05.266: INFO: Container calico-node ready: true, restart count 0 Mar 20 20:55:05.266: INFO: coredns-5d78c9869d-wh4l9 started at 2023-03-20 20:39:19 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container coredns ready: true, restart count 0 Mar 20 20:55:05.266: INFO: calico-apiserver-5467959f9d-n9qxv started at 2023-03-20 20:39:51 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container calico-apiserver ready: true, restart count 0 Mar 20 20:55:05.266: INFO: kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm started at 2023-03-20 20:38:00 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container kube-scheduler ready: true, restart count 0 Mar 20 20:55:05.266: INFO: calico-kube-controllers-59d9cb8fbb-8ft2d started at 2023-03-20 20:39:19 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container calico-kube-controllers ready: true, restart count 0 Mar 20 20:55:05.266: INFO: csi-azuredisk-controller-56db99df6c-sbnn7 started at 2023-03-20 20:40:11 +0000 UTC (0+6 container statuses recorded) Mar 20 20:55:05.266: INFO: Container azuredisk ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container csi-attacher ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container csi-provisioner ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container csi-resizer ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 20 20:55:05.266: INFO: Container liveness-probe ready: true, restart count 0 Mar 20 20:55:05.266: INFO: etcd-capz-conf-1plfqp-control-plane-2j2gm started at 2023-03-20 20:38:00 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:05.266: INFO: Container etcd ready: true, restart count 0 Mar 20 20:55:05.864: INFO: Latency metrics for node capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:05.864: INFO: Logging node info for node capz-conf-gm7xg Mar 20 20:55:05.997: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gm7xg 81f37820-91f6-4190-9850-f6b9b34795ff 4074 0 2023-03-20 20:41:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:canadacentral failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gm7xg kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:canadacentral topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-1plfqp cluster.x-k8s.io/cluster-namespace:capz-conf-1plfqp cluster.x-k8s.io/machine:capz-conf-1plfqp-md-win-65dbf97bf6-csgg7 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-1plfqp-md-win-65dbf97bf6 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-20 20:41:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-03-20 20:41:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2023-03-20 20:42:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-20 20:44:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet.exe Update v1 2023-03-20 20:51:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-1plfqp/providers/Microsoft.Compute/virtualMachines/capz-conf-gm7xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:16 +0000 UTC,LastTransitionTime:2023-03-20 20:41:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:16 +0000 UTC,LastTransitionTime:2023-03-20 20:41:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:16 +0000 UTC,LastTransitionTime:2023-03-20 20:41:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-20 20:51:16 +0000 UTC,LastTransitionTime:2023-03-20 20:44:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gm7xg,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-gm7xg,SystemUUID:6742B7A6-C8EA-4D43-BD17-2D5ABCED8BC6,BootID:9,KernelVersion:10.0.17763.4131,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.7.0,KubeletVersion:v1.27.0-beta.0.29+117662b4a973d5-dirty,KubeProxyVersion:v1.27.0-beta.0.29+117662b4a973d5-dirty,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:130192348,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:014b93f2aae432969c3ffe0f99d4c30537e101572f1007e9a15ace393df47e7b docker.io/sigwindowstools/calico-install:v3.25.0-hostprocess],SizeBytes:49946025,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 20 20:55:05.997: INFO: Logging kubelet events for node capz-conf-gm7xg Mar 20 20:55:06.198: INFO: Logging pods the kubelet thinks is on node capz-conf-gm7xg Mar 20 20:55:06.407: INFO: calico-node-windows-9f96h started at 2023-03-20 20:41:01 +0000 UTC (1+2 container statuses recorded) Mar 20 20:55:06.407: INFO: Init container install-cni ready: false, restart count 120 Mar 20 20:55:06.407: INFO: Container calico-node-felix ready: false, restart count 0 Mar 20 20:55:06.407: INFO: Container calico-node-startup ready: false, restart count 0 Mar 20 20:55:06.407: INFO: containerd-logger-ng4wl started at 2023-03-20 20:41:01 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:06.407: INFO: Container containerd-logger ready: false, restart count 9 Mar 20 20:55:06.407: INFO: kube-proxy-windows-527hb started at 2023-03-20 20:41:01 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:06.407: INFO: Container kube-proxy ready: false, restart count 9 Mar 20 20:55:06.407: INFO: csi-azuredisk-node-win-778bd started at 2023-03-20 20:44:46 +0000 UTC (1+3 container statuses recorded) Mar 20 20:55:06.407: INFO: Init container init ready: false, restart count 15 Mar 20 20:55:06.407: INFO: Container azuredisk ready: false, restart count 0 Mar 20 20:55:06.407: INFO: Container liveness-probe ready: false, restart count 0 Mar 20 20:55:06.407: INFO: Container node-driver-registrar ready: false, restart count 0 Mar 20 20:55:06.407: INFO: csi-proxy-4v7zg started at 2023-03-20 20:44:46 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:06.407: INFO: Container csi-proxy ready: false, restart count 7 Mar 20 20:55:07.048: INFO: Latency metrics for node capz-conf-gm7xg Mar 20 20:55:07.048: INFO: Logging node info for node capz-conf-vvvcd Mar 20 20:55:07.199: INFO: Node Info: &Node{ObjectMeta:{capz-conf-vvvcd b1d69908-dbd2-4a3a-8f8c-88c76c6558ec 4148 0 2023-03-20 20:40:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:canadacentral failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-vvvcd kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:canadacentral topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-1plfqp cluster.x-k8s.io/cluster-namespace:capz-conf-1plfqp cluster.x-k8s.io/machine:capz-conf-1plfqp-md-win-65dbf97bf6-j9qvz cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-1plfqp-md-win-65dbf97bf6 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-20 20:40:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-03-20 20:40:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2023-03-20 20:41:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-20 20:51:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet.exe Update v1 2023-03-20 20:51:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-1plfqp/providers/Microsoft.Compute/virtualMachines/capz-conf-vvvcd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:28 +0000 UTC,LastTransitionTime:2023-03-20 20:40:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:28 +0000 UTC,LastTransitionTime:2023-03-20 
20:40:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-20 20:51:28 +0000 UTC,LastTransitionTime:2023-03-20 20:40:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-20 20:51:28 +0000 UTC,LastTransitionTime:2023-03-20 20:51:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-vvvcd,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-vvvcd,SystemUUID:14681417-8FCD-42E8-B00B-880545CD35C0,BootID:9,KernelVersion:10.0.17763.4131,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.7.0,KubeletVersion:v1.27.0-beta.0.29+117662b4a973d5-dirty,KubeProxyVersion:v1.27.0-beta.0.29+117662b4a973d5-dirty,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:130192348,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:014b93f2aae432969c3ffe0f99d4c30537e101572f1007e9a15ace393df47e7b docker.io/sigwindowstools/calico-install:v3.25.0-hostprocess],SizeBytes:49946025,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 20 20:55:07.199: INFO: Logging kubelet events for node capz-conf-vvvcd Mar 20 20:55:07.397: INFO: Logging pods the kubelet thinks is on node capz-conf-vvvcd Mar 20 20:55:07.606: INFO: csi-azuredisk-node-win-nrh82 started at 2023-03-20 20:42:07 +0000 UTC (1+3 container statuses recorded) Mar 20 20:55:07.606: INFO: Init container init ready: false, restart count 173 Mar 20 20:55:07.606: INFO: Container azuredisk ready: false, restart count 0 Mar 20 20:55:07.606: INFO: Container liveness-probe ready: false, restart count 0 Mar 20 20:55:07.606: INFO: Container node-driver-registrar ready: false, restart count 0 Mar 20 20:55:07.606: INFO: csi-proxy-bnsgh started at 2023-03-20 20:42:07 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:07.606: INFO: Container csi-proxy ready: false, restart count 9 Mar 20 20:55:07.606: INFO: calico-node-windows-k9kth started at 2023-03-20 20:40:55 +0000 UTC (1+2 container statuses recorded) Mar 20 20:55:07.606: INFO: Init container install-cni ready: false, restart count 28 Mar 20 20:55:07.606: INFO: Container calico-node-felix ready: false, restart count 0 Mar 20 
20:55:07.606: INFO: Container calico-node-startup ready: false, restart count 0 Mar 20 20:55:07.606: INFO: containerd-logger-xxz7w started at 2023-03-20 20:40:55 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:07.606: INFO: Container containerd-logger ready: false, restart count 9 Mar 20 20:55:07.606: INFO: kube-proxy-windows-wmp2s started at 2023-03-20 20:40:55 +0000 UTC (0+1 container statuses recorded) Mar 20 20:55:07.606: INFO: Container kube-proxy ready: false, restart count 9 Mar 20 20:55:08.230: INFO: Latency metrics for node capz-conf-vvvcd Mar 20 20:55:08.407: INFO: Running kubectl logs on non-ready containers in kube-system Mar 20 20:55:08.605: INFO: Logs of kube-system/containerd-logger-ng4wl:containerd-logger on node capz-conf-gm7xg Mar 20 20:55:08.605: INFO: : STARTLOG Using configuration file config.json: { "inputs": [ { "type": "ETW", "sessionNamePrefix": "containerd", "cleanupOldSessions": true, "reuseExistingSession": true, "providers": [ { "providerName": "Microsoft.Virtualization.RunHCS", "providerGuid": "0B52781F-B24D-5685-DDF6-69830ED40EC3", "level": "Verbose" }, { "providerName": "ContainerD", "providerGuid": "2acb92c0-eb9b-571a-69cf-8f3410f383ad", "level": "Verbose" } ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ], "schemaVersion": "2016-08-11" } Logging started... 
ENDLOG for container kube-system:containerd-logger-ng4wl:containerd-logger Mar 20 20:55:08.801: INFO: Logs of kube-system/containerd-logger-xxz7w:containerd-logger on node capz-conf-vvvcd Mar 20 20:55:08.801: INFO: : STARTLOG Using configuration file config.json: { "inputs": [ { "type": "ETW", "sessionNamePrefix": "containerd", "cleanupOldSessions": true, "reuseExistingSession": true, "providers": [ { "providerName": "Microsoft.Virtualization.RunHCS", "providerGuid": "0B52781F-B24D-5685-DDF6-69830ED40EC3", "level": "Verbose" }, { "providerName": "ContainerD", "providerGuid": "2acb92c0-eb9b-571a-69cf-8f3410f383ad", "level": "Verbose" } ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ], "schemaVersion": "2016-08-11" } Logging started... ENDLOG for container kube-system:containerd-logger-xxz7w:containerd-logger Mar 20 20:55:09.198: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd) Mar 20 20:55:09.198: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:liveness-probe on node capz-conf-gm7xg Mar 20 20:55:09.198: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:liveness-probe Mar 20 20:55:09.597: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd) Mar 20 20:55:09.597: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:node-driver-registrar on node capz-conf-gm7xg Mar 20 20:55:09.597: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:node-driver-registrar Mar 20 20:55:09.998: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd) Mar 20 20:55:09.998: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:azuredisk on node capz-conf-gm7xg Mar 20 20:55:09.998: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:azuredisk Mar 20 20:55:10.397: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82) Mar 20 20:55:10.397: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:liveness-probe on node capz-conf-vvvcd Mar 20 20:55:10.397: INFO: : STARTLOG ENDLOG for container 
kube-system:csi-azuredisk-node-win-nrh82:liveness-probe Mar 20 20:55:10.798: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82) Mar 20 20:55:10.798: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:node-driver-registrar on node capz-conf-vvvcd Mar 20 20:55:10.798: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:node-driver-registrar Mar 20 20:55:11.197: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82) Mar 20 20:55:11.197: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:azuredisk on node capz-conf-vvvcd Mar 20 20:55:11.197: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:azuredisk Mar 20 20:55:11.413: INFO: Logs of kube-system/csi-proxy-4v7zg:csi-proxy on node capz-conf-gm7xg Mar 20 20:55:11.413: INFO: : STARTLOG I0320 20:50:58.646137 4320 main.go:54] Starting CSI-Proxy Server ... I0320 20:50:58.775337 4320 main.go:55] Version: v1.0.2-0-g51a6f06 ENDLOG for container kube-system:csi-proxy-4v7zg:csi-proxy Mar 20 20:55:11.607: INFO: Logs of kube-system/csi-proxy-bnsgh:csi-proxy on node capz-conf-vvvcd Mar 20 20:55:11.607: INFO: : STARTLOG I0320 20:53:13.887787 4860 main.go:54] Starting CSI-Proxy Server ... I0320 20:53:13.935638 4860 main.go:55] Version: v1.0.2-0-g51a6f06 ENDLOG for container kube-system:csi-proxy-bnsgh:csi-proxy Mar 20 20:55:11.801: INFO: Logs of kube-system/kube-proxy-windows-527hb:kube-proxy on node capz-conf-gm7xg Mar 20 20:55:11.801: INFO: : STARTLOG ENDLOG for container kube-system:kube-proxy-windows-527hb:kube-proxy Mar 20 20:55:12.013: INFO: Logs of kube-system/kube-proxy-windows-wmp2s:kube-proxy on node capz-conf-vvvcd Mar 20 20:55:12.013: INFO: : STARTLOG WARNING: The names of some imported commands from the module 'hns' include unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose parameter. For a list of approved verbs, type Get-Verb. Running kub-proxy service. Waiting for HNS network Calico to be created... ENDLOG for container kube-system:kube-proxy-windows-wmp2s:kube-proxy [FAILED] in [SynchronizedBeforeSuite] - test/e2e/e2e.go:242 @ 03/20/23 20:55:12.014 << Timeline [FAILED] Error waiting for all pods to be running and ready: Timed out after 600.000s. Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0). 10 / 18 pods were running and ready. Expected 4 pod replicas, 4 are Running and Ready. 
Pods that were neither completed nor running: <[]v1.Pod | len:8, cap:8>: - metadata: creationTimestamp: "2023-03-20T20:41:01Z" generateName: containerd-logger- labels: controller-revision-hash: 56b7f4bb6 k8s-app: containerd-logger pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"e14f097e-f84b-45a1-bc2c-43dd1b04e785"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"containerd-logger"}: .: {} f:args: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/config.json"}: .: {} f:mountPath: {} f:name: {} f:subPath: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"containerd-logger-config"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:41:01Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:54:10Z" name: containerd-logger-ng4wl namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: containerd-logger uid: e14f097e-f84b-45a1-bc2c-43dd1b04e785 resourceVersion: "4782" uid: bd28dbc9-32d2-41df-8201-42b78981a1f5 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-gm7xg containers: - args: - config.json image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imagePullPolicy: Always name: containerd-logger resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config.json name: containerd-logger-config subPath: config.json - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9r5x4 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-gm7xg nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule 
key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: containerd-logger-config name: containerd-logger-config - name: kube-api-access-9r5x4 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:41:01Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:54:10Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:54:10Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:41:01Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://1a2e8e1c5f111dd334de452542dd1eb113ab1ba967088428f64f5c344c449e41 image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 lastState: terminated: containerID: containerd://1a2e8e1c5f111dd334de452542dd1eb113ab1ba967088428f64f5c344c449e41 exitCode: -1073741510 finishedAt: "2023-03-20T20:54:06Z" reason: Error startedAt: "2023-03-20T20:54:05Z" name: containerd-logger ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-ng4wl_kube-system(bd28dbc9-32d2-41df-8201-42b78981a1f5) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-20T20:41:01Z" - metadata: creationTimestamp: "2023-03-20T20:40:54Z" generateName: containerd-logger- labels: controller-revision-hash: 56b7f4bb6 k8s-app: containerd-logger pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"e14f097e-f84b-45a1-bc2c-43dd1b04e785"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"containerd-logger"}: .: {} f:args: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/config.json"}: .: {} f:mountPath: {} f:name: {} f:subPath: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"containerd-logger-config"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: 
"2023-03-20T20:40:54Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:52:45Z" name: containerd-logger-xxz7w namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: containerd-logger uid: e14f097e-f84b-45a1-bc2c-43dd1b04e785 resourceVersion: "4454" uid: e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-vvvcd containers: - args: - config.json image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imagePullPolicy: Always name: containerd-logger resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config.json name: containerd-logger-config subPath: config.json - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-5lfn7 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-vvvcd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: containerd-logger-config name: containerd-logger-config - name: kube-api-access-5lfn7 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:40:55Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:52:45Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:52:45Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:40:54Z" status: "True" type: 
PodScheduled containerStatuses: - containerID: containerd://8d389ab98b1e1381585a689130d3632e29d79314045e6027a7505ced598f75ac image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 lastState: terminated: containerID: containerd://8d389ab98b1e1381585a689130d3632e29d79314045e6027a7505ced598f75ac exitCode: -1073741510 finishedAt: "2023-03-20T20:52:41Z" reason: Error startedAt: "2023-03-20T20:52:40Z" name: containerd-logger ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-xxz7w_kube-system(e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-20T20:40:55Z" - metadata: creationTimestamp: "2023-03-20T20:44:46Z" generateName: csi-azuredisk-node-win- labels: app: csi-azuredisk-node-win app.kubernetes.io/instance: azuredisk-csi-driver-oot app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: azuredisk-csi-driver app.kubernetes.io/version: v1.27.0 controller-revision-hash: d9d49cd64 helm.sh/chart: azuredisk-csi-driver-v1.27.0 pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:app: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/version: {} f:controller-revision-hash: {} f:helm.sh/chart: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"fd1c6a12-5673-482e-9658-9034527b5368"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"azuredisk"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"AZURE_CREDENTIAL_FILE"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"AZURE_GO_SDK_LOG_LEVEL"}: .: {} f:name: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":29603,"protocol":"TCP"}: .: {} f:containerPort: {} f:hostPort: {} f:name: {} f:protocol: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"liveness-probe"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"node-driver-registrar"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"DRIVER_REG_SOCK_PATH"}: .: {} f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"PLUGIN_REG_DIR"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: 
{} f:hostNetwork: {} f:initContainers: .: {} k:{"name":"init"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:nodeSelector: {} f:priorityClassName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:44:46Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:initContainerStatuses: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:54:56Z" name: csi-azuredisk-node-win-778bd namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-azuredisk-node-win uid: fd1c6a12-5673-482e-9658-9034527b5368 resourceVersion: "4957" uid: 7a52151d-dd5a-434c-8a85-3f4066e22329 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-gm7xg containers: - args: - --csi-address=$(CSI_ENDPOINT) - --probe-timeout=3s - --health-port=29603 - --v=2 command: - livenessprobe.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imagePullPolicy: IfNotPresent name: liveness-probe resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xl8pr readOnly: true - args: - --v=2 - --csi-address=$(CSI_ENDPOINT) - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --plugin-registration-path=$(PLUGIN_REG_DIR) command: - csi-node-driver-registrar.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: DRIVER_REG_SOCK_PATH value: C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: PLUGIN_REG_DIR value: C:\\var\\lib\\kubelet\\plugins_registry\\ - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - csi-node-driver-registrar.exe - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --mode=kubelet-registration-probe failureThreshold: 3 initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 30 name: node-driver-registrar resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xl8pr readOnly: true - args: - --v=5 - --endpoint=$(CSI_ENDPOINT) - --nodeid=$(KUBE_NODE_NAME) - --metrics-address=0.0.0.0:29605 - --drivername=disk.csi.azure.com - 
--volume-attach-limit=-1 - --cloud-config-secret-name=azure-cloud-provider - --cloud-config-secret-namespace=kube-system - --custom-user-agent= - --user-agent-suffix=OSS-helm - --allow-empty-cloud-config=true - --support-zone=true command: - azurediskplugin.exe env: - name: AZURE_CREDENTIAL_FILE valueFrom: configMapKeyRef: key: path-windows name: azure-cred-file optional: true - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: AZURE_GO_SDK_LOG_LEVEL image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: healthz scheme: HTTP initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: azuredisk ports: - containerPort: 29603 hostPort: 29603 name: healthz protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xl8pr readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true initContainers: - command: - powershell.exe - -c - New-Item - -ItemType - Directory - -Path - C:\var\lib\kubelet\plugins\disk.csi.azure.com\ - -Force image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent name: init resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xl8pr readOnly: true nodeName: capz-conf-gm7xg nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: csi-azuredisk-node-sa serviceAccountName: csi-azuredisk-node-sa terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node.kubernetes.io/os operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-xl8pr projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:51:52Z" message: 'containers with incomplete status: [init]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:44:46Z" message: 'containers with unready status: [liveness-probe node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:44:46Z" message: 'containers with unready status: [liveness-probe 
node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:44:46Z" status: "True" type: PodScheduled containerStatuses: - image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: "" lastState: {} name: azuredisk ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imageID: "" lastState: {} name: liveness-probe ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imageID: "" lastState: {} name: node-driver-registrar ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.1.0.4 initContainerStatuses: - containerID: containerd://628e3143fd21dc45c3752bacd217da95c3b025cb6933a0cd2d0541cdd2a0d4e4 image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca lastState: running: startedAt: "2023-03-20T20:54:54Z" name: init ready: false restartCount: 13 state: running: startedAt: "2023-03-20T20:54:56Z" phase: Pending podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-20T20:44:46Z" - metadata: creationTimestamp: "2023-03-20T20:42:07Z" generateName: csi-azuredisk-node-win- labels: app: csi-azuredisk-node-win app.kubernetes.io/instance: azuredisk-csi-driver-oot app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: azuredisk-csi-driver app.kubernetes.io/version: v1.27.0 controller-revision-hash: d9d49cd64 helm.sh/chart: azuredisk-csi-driver-v1.27.0 pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:app: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/version: {} f:controller-revision-hash: {} f:helm.sh/chart: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"fd1c6a12-5673-482e-9658-9034527b5368"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"azuredisk"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"AZURE_CREDENTIAL_FILE"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"AZURE_GO_SDK_LOG_LEVEL"}: .: {} f:name: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":29603,"protocol":"TCP"}: .: {} f:containerPort: {} f:hostPort: {} f:name: {} f:protocol: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"liveness-probe"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"node-driver-registrar"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"DRIVER_REG_SOCK_PATH"}: .: {} 
f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"PLUGIN_REG_DIR"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:initContainers: .: {} k:{"name":"init"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:nodeSelector: {} f:priorityClassName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:42:07Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:initContainerStatuses: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:55:00Z" name: csi-azuredisk-node-win-nrh82 namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-azuredisk-node-win uid: fd1c6a12-5673-482e-9658-9034527b5368 resourceVersion: "4969" uid: 5504c7c9-8fd9-4126-8060-da6f55027440 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-vvvcd containers: - args: - --csi-address=$(CSI_ENDPOINT) - --probe-timeout=3s - --health-port=29603 - --v=2 command: - livenessprobe.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imagePullPolicy: IfNotPresent name: liveness-probe resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lsczl readOnly: true - args: - --v=2 - --csi-address=$(CSI_ENDPOINT) - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --plugin-registration-path=$(PLUGIN_REG_DIR) command: - csi-node-driver-registrar.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: DRIVER_REG_SOCK_PATH value: C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: PLUGIN_REG_DIR value: C:\\var\\lib\\kubelet\\plugins_registry\\ - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - csi-node-driver-registrar.exe - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --mode=kubelet-registration-probe 
failureThreshold: 3 initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 30 name: node-driver-registrar resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lsczl readOnly: true - args: - --v=5 - --endpoint=$(CSI_ENDPOINT) - --nodeid=$(KUBE_NODE_NAME) - --metrics-address=0.0.0.0:29605 - --drivername=disk.csi.azure.com - --volume-attach-limit=-1 - --cloud-config-secret-name=azure-cloud-provider - --cloud-config-secret-namespace=kube-system - --custom-user-agent= - --user-agent-suffix=OSS-helm - --allow-empty-cloud-config=true - --support-zone=true command: - azurediskplugin.exe env: - name: AZURE_CREDENTIAL_FILE valueFrom: configMapKeyRef: key: path-windows name: azure-cred-file optional: true - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: AZURE_GO_SDK_LOG_LEVEL image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: healthz scheme: HTTP initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: azuredisk ports: - containerPort: 29603 hostPort: 29603 name: healthz protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lsczl readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true initContainers: - command: - powershell.exe - -c - New-Item - -ItemType - Directory - -Path - C:\var\lib\kubelet\plugins\disk.csi.azure.com\ - -Force image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent name: init resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lsczl readOnly: true nodeName: capz-conf-vvvcd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: csi-azuredisk-node-sa serviceAccountName: csi-azuredisk-node-sa terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node.kubernetes.io/os operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-lsczl projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: 
"2023-03-20T20:42:07Z" message: 'containers with incomplete status: [init]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:42:07Z" message: 'containers with unready status: [liveness-probe node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:42:07Z" message: 'containers with unready status: [liveness-probe node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:42:07Z" status: "True" type: PodScheduled containerStatuses: - image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: "" lastState: {} name: azuredisk ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imageID: "" lastState: {} name: liveness-probe ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imageID: "" lastState: {} name: node-driver-registrar ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.1.0.5 initContainerStatuses: - containerID: containerd://bf2042d318e27ea5a4d17cf712b8797b655b9d38c48db67d1a6ce74d750eacad image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca lastState: running: startedAt: "2023-03-20T20:54:57Z" name: init ready: false restartCount: 172 state: running: startedAt: "2023-03-20T20:54:59Z" phase: Pending podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-20T20:42:07Z" - metadata: creationTimestamp: "2023-03-20T20:44:46Z" generateName: csi-proxy- labels: controller-revision-hash: 69f9986785 k8s-app: csi-proxy pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"b1adb031-d26f-490d-96a9-90231879b4f1"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"csi-proxy"}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:44:46Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: 
"2023-03-20T20:51:04Z" name: csi-proxy-4v7zg namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-proxy uid: b1adb031-d26f-490d-96a9-90231879b4f1 resourceVersion: "4025" uid: 4bfb48ce-a08e-4c4b-8d11-594ea6912696 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-gm7xg containers: - image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imagePullPolicy: IfNotPresent name: csi-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-dg7rl readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-gm7xg nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-dg7rl projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:44:46Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:51:04Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:51:04Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:44:46Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://54e337036669b590080a33498de1e29bfe77c5ddc7a9d11a0edabf34a8b47d31 image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba lastState: terminated: containerID: containerd://54e337036669b590080a33498de1e29bfe77c5ddc7a9d11a0edabf34a8b47d31 exitCode: -1073741510 finishedAt: "2023-03-20T20:50:59Z" reason: Error startedAt: "2023-03-20T20:50:58Z" name: csi-proxy ready: false restartCount: 7 started: false state: waiting: message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-4v7zg_kube-system(4bfb48ce-a08e-4c4b-8d11-594ea6912696) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-20T20:44:46Z" - metadata: creationTimestamp: "2023-03-20T20:42:07Z" 
generateName: csi-proxy- labels: controller-revision-hash: 69f9986785 k8s-app: csi-proxy pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"b1adb031-d26f-490d-96a9-90231879b4f1"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"csi-proxy"}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:42:07Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:53:19Z" name: csi-proxy-bnsgh namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-proxy uid: b1adb031-d26f-490d-96a9-90231879b4f1 resourceVersion: "4585" uid: d8246000-ea4b-4f56-a4b8-755b44656004 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-vvvcd containers: - image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imagePullPolicy: IfNotPresent name: csi-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2cg4z readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-vvvcd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-2cg4z projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt 
- downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:42:07Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:53:19Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:53:19Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:42:07Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://2353a4851cba8f20bd2fc79edf323753812abc396be6ff59943c0f1f8776e173 image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba lastState: terminated: containerID: containerd://2353a4851cba8f20bd2fc79edf323753812abc396be6ff59943c0f1f8776e173 exitCode: -1073741510 finishedAt: "2023-03-20T20:53:14Z" reason: Error startedAt: "2023-03-20T20:53:13Z" name: csi-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-bnsgh_kube-system(d8246000-ea4b-4f56-a4b8-755b44656004) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-20T20:42:07Z" - metadata: creationTimestamp: "2023-03-20T20:41:01Z" generateName: kube-proxy-windows- labels: controller-revision-hash: cf7c74ff8 k8s-app: kube-proxy-windows pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"920da589-7754-4740-b125-e024301498be"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"kube-proxy"}: .: {} f:args: {} f:env: .: {} k:{"name":"KUBEPROXY_PATH"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"POD_IP"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/var/lib/kube-proxy"}: .: {} f:mountPath: {} f:name: {} f:workingDir: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"kube-proxy"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:41:01Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} 
f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:52:02Z" name: kube-proxy-windows-527hb namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: kube-proxy-windows uid: 920da589-7754-4740-b125-e024301498be resourceVersion: "4280" uid: 00140840-3274-4053-b4b9-49e8d5996de7 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-gm7xg containers: - args: - $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/start.ps1 env: - name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: KUBEPROXY_PATH valueFrom: configMapKeyRef: key: KUBEPROXY_PATH name: windows-kubeproxy-ci optional: true image: sigwindowstools/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6-calico-hostprocess imagePullPolicy: IfNotPresent name: kube-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-nbk58 readOnly: true workingDir: $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/ dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-gm7xg nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: kube-proxy serviceAccountName: kube-proxy terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: kube-proxy name: kube-proxy - name: kube-api-access-nbk58 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:41:01Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:52:02Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:52:02Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:41:01Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://39f24b2731756787768c3581c711a27fc3bc56470b3c79d1d4bf2d3bae83468b 
image: docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed lastState: terminated: containerID: containerd://39f24b2731756787768c3581c711a27fc3bc56470b3c79d1d4bf2d3bae83468b exitCode: -1073741510 finishedAt: "2023-03-20T20:51:56Z" reason: Error startedAt: "2023-03-20T20:51:56Z" name: kube-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-527hb_kube-system(00140840-3274-4053-b4b9-49e8d5996de7) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-20T20:41:01Z" - metadata: creationTimestamp: "2023-03-20T20:40:54Z" generateName: kube-proxy-windows- labels: controller-revision-hash: cf7c74ff8 k8s-app: kube-proxy-windows pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"920da589-7754-4740-b125-e024301498be"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"kube-proxy"}: .: {} f:args: {} f:env: .: {} k:{"name":"KUBEPROXY_PATH"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"POD_IP"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/var/lib/kube-proxy"}: .: {} f:mountPath: {} f:name: {} f:workingDir: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"kube-proxy"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: "2023-03-20T20:40:54Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-20T20:51:58Z" name: kube-proxy-windows-wmp2s namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: kube-proxy-windows uid: 920da589-7754-4740-b125-e024301498be resourceVersion: "4264" uid: bcd38796-26a8-4f15-9513-2a8ac58d2ba4 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-vvvcd containers: - args: - $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/start.ps1 env: - name: NODE_NAME valueFrom: fieldRef: apiVersion: 
v1 fieldPath: spec.nodeName - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: KUBEPROXY_PATH valueFrom: configMapKeyRef: key: KUBEPROXY_PATH name: windows-kubeproxy-ci optional: true image: sigwindowstools/kube-proxy:v1.27.0-beta.0.25_15894cfc85cab6-calico-hostprocess imagePullPolicy: IfNotPresent name: kube-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-46zmt readOnly: true workingDir: $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/ dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-vvvcd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: kube-proxy serviceAccountName: kube-proxy terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: kube-proxy name: kube-proxy - name: kube-api-access-46zmt projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-20T20:40:55Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-20T20:51:58Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-20T20:51:58Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-20T20:40:54Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://2e0305ad0952156deb179bd5d9b7d8b1583328d2294ce9dddab65ae4da035397 image: docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed lastState: terminated: containerID: containerd://2e0305ad0952156deb179bd5d9b7d8b1583328d2294ce9dddab65ae4da035397 exitCode: -1073741510 finishedAt: "2023-03-20T20:51:53Z" reason: Error startedAt: "2023-03-20T20:51:52Z" name: kube-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-wmp2s_kube-system(bcd38796-26a8-4f15-9513-2a8ac58d2ba4) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-20T20:40:55Z"�[0m �[38;5;9mIn �[1m[SynchronizedBeforeSuite]�[0m�[38;5;9m at: 
test/e2e/e2e.go:242 @ 03/20/23 20:55:12.014
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.311 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.335 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.331 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
------------------------------
Summarizing 4 Failures:
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:242
Ran 0 of 7207 Specs in 758.428 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.
I0320 20:42:33.233579 15 e2e.go:117] Starting e2e run "6de80a66-9fe0-470f-a3de-d8b524a156e7" on Ginkgo node 1
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (758.86s)
FAIL
I0320 20:42:33.231585 16 e2e.go:117] Starting e2e run "1c4d63ab-4761-4bda-93a7-5f81513b835c" on Ginkgo node 2
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (758.81s)
FAIL
I0320 20:42:33.243068 17 e2e.go:117] Starting e2e run "7672dc9e-5156-48cf-a02d-283c07070e7d" on Ginkgo node 3
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (758.80s)
FAIL
I0320 20:42:33.229510 19 e2e.go:117] Starting e2e run "def2f57d-e7a3-42b4-89c4-1100437748c0" on Ginkgo node 4
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (758.81s)
FAIL
Ginkgo ran 1 suite in 12m38.988656037s
Test Suite Failed
You're using deprecated Ginkgo functionality:
=============================================
--slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1
[FAILED] Unexpected error: <*errors.withStack | 0xc000f9b470>: { error: <*errors.withMessage | 0xc002656300>{ cause: <*errors.errorString | 0xc00021f130>{ s: "error container run failed with exit code 1", }, msg: "Unable to run conformance tests", }, stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761], } Unable to run conformance tests: error container run failed with exit code 1 occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532
< Exit [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98 @ 03/20/23 20:55:12.532 (21m50.199s)
> Enter [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:231 @ 03/20/23 20:55:12.532
Mar 20 20:55:12.532: INFO: FAILED!
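Note on the pod dump above: every Windows hostProcess container that is crash-looping (containerd-logger, csi-proxy, kube-proxy-windows) last terminated with the same exit code, -1073741510. As a hedged aside rather than anything the log itself states: reading that value as the signed 32-bit form of a Windows NTSTATUS code gives 0xC000013A (STATUS_CONTROL_C_EXIT), i.e. the processes exited in response to a console control event rather than with an ordinary application error. A minimal Go sketch of the conversion, assuming nothing beyond the exit code reported by the kubelet:

    package main

    import "fmt"

    func main() {
        // Exit code reported above for the crashing Windows hostProcess containers
        // (containerd-logger, csi-proxy, kube-proxy-windows).
        var exitCode int32 = -1073741510

        // Reinterpret the same 32 bits as an unsigned value to recover the NTSTATUS code.
        ntstatus := uint32(exitCode)

        fmt.Printf("exit code %d = NTSTATUS 0x%08X\n", exitCode, ntstatus)
        // Prints: exit code -1073741510 = NTSTATUS 0xC000013A
    }

Whether the control event comes from the container runtime restarting the hostProcess containers or from something on the nodes cannot be determined from this dump alone; the crashdumps.tar and node logs collected below, or kubectl logs --previous for these pods, are the places to look.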
Mar 20 20:55:12.533: INFO: Cleaning up after "Conformance Tests conformance-tests" spec Mar 20 20:55:12.533: INFO: Dumping all the Cluster API resources in the "capz-conf-1plfqp" namespace STEP: Dumping logs from the "capz-conf-1plfqp" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/20/23 20:55:12.877 Mar 20 20:55:12.877: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp logs Mar 20 20:55:12.914: INFO: Collecting logs for Linux node capz-conf-1plfqp-control-plane-2j2gm in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp Mar 20 20:55:25.757: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:26.746: INFO: Collecting logs for Windows node capz-conf-gm7xg in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp Mar 20 20:58:06.583: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-gm7xg to /logs/artifacts/clusters/capz-conf-1plfqp/machines/capz-conf-1plfqp-md-win-65dbf97bf6-csgg7/crashdumps.tar Mar 20 20:58:08.384: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-md-win-gm7xg Mar 20 20:58:09.375: INFO: Collecting logs for Windows node capz-conf-vvvcd in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp Mar 20 21:00:38.829: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-vvvcd to /logs/artifacts/clusters/capz-conf-1plfqp/machines/capz-conf-1plfqp-md-win-65dbf97bf6-j9qvz/crashdumps.tar Mar 20 21:00:40.689: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-md-win-vvvcd Mar 20 21:00:41.549: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp nodes Mar 20 21:00:41.850: INFO: Describing Node capz-conf-1plfqp-control-plane-2j2gm Mar 20 21:00:42.067: INFO: Describing Node capz-conf-gm7xg Mar 20 21:00:42.265: INFO: Describing Node capz-conf-vvvcd Mar 20 21:00:42.461: INFO: Fetching nodes took 912.555835ms Mar 20 21:00:42.462: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp pod logs Mar 20 21:00:42.741: INFO: Describing Pod calico-apiserver/calico-apiserver-5467959f9d-8zg79 Mar 20 21:00:42.741: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-5467959f9d-8zg79, container calico-apiserver Mar 20 21:00:42.810: INFO: Describing Pod calico-apiserver/calico-apiserver-5467959f9d-n9qxv Mar 20 21:00:42.811: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-5467959f9d-n9qxv, container calico-apiserver Mar 20 21:00:42.883: INFO: Describing Pod calico-system/calico-kube-controllers-59d9cb8fbb-8ft2d Mar 20 21:00:42.883: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-59d9cb8fbb-8ft2d, container calico-kube-controllers Mar 20 21:00:42.959: INFO: Describing Pod calico-system/calico-node-bdvzb Mar 20 21:00:42.959: INFO: Creating log watcher for controller calico-system/calico-node-bdvzb, container calico-node Mar 20 21:00:43.043: INFO: Describing Pod calico-system/calico-node-windows-9f96h Mar 20 21:00:43.043: INFO: Creating log watcher for controller calico-system/calico-node-windows-9f96h, container calico-node-startup Mar 20 21:00:43.044: INFO: Creating log watcher for controller calico-system/calico-node-windows-9f96h, container calico-node-felix Mar 20 21:00:43.100: INFO: Error starting logs stream for pod calico-system/calico-node-windows-9f96h, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-9f96h" is waiting to start: PodInitializing Mar 20 21:00:43.100: INFO: Error starting 
logs stream for pod calico-system/calico-node-windows-9f96h, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-9f96h" is waiting to start: PodInitializing Mar 20 21:00:43.115: INFO: Describing Pod calico-system/calico-node-windows-k9kth Mar 20 21:00:43.115: INFO: Creating log watcher for controller calico-system/calico-node-windows-k9kth, container calico-node-startup Mar 20 21:00:43.115: INFO: Creating log watcher for controller calico-system/calico-node-windows-k9kth, container calico-node-felix Mar 20 21:00:43.169: INFO: Error starting logs stream for pod calico-system/calico-node-windows-k9kth, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-k9kth" is waiting to start: PodInitializing Mar 20 21:00:43.170: INFO: Error starting logs stream for pod calico-system/calico-node-windows-k9kth, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-k9kth" is waiting to start: PodInitializing Mar 20 21:00:43.507: INFO: Describing Pod calico-system/calico-typha-96fb785dc-c7sr9 Mar 20 21:00:43.507: INFO: Creating log watcher for controller calico-system/calico-typha-96fb785dc-c7sr9, container calico-typha Mar 20 21:00:43.908: INFO: Describing Pod calico-system/csi-node-driver-j9ptp Mar 20 21:00:43.908: INFO: Creating log watcher for controller calico-system/csi-node-driver-j9ptp, container csi-node-driver-registrar Mar 20 21:00:43.908: INFO: Creating log watcher for controller calico-system/csi-node-driver-j9ptp, container calico-csi Mar 20 21:00:44.310: INFO: Describing Pod kube-system/containerd-logger-ng4wl Mar 20 21:00:44.310: INFO: Creating log watcher for controller kube-system/containerd-logger-ng4wl, container containerd-logger Mar 20 21:00:44.709: INFO: Describing Pod kube-system/containerd-logger-xxz7w Mar 20 21:00:44.709: INFO: Creating log watcher for controller kube-system/containerd-logger-xxz7w, container containerd-logger Mar 20 21:00:45.109: INFO: Describing Pod kube-system/coredns-5d78c9869d-c58vk Mar 20 21:00:45.109: INFO: Creating log watcher for controller kube-system/coredns-5d78c9869d-c58vk, container coredns Mar 20 21:00:45.508: INFO: Describing Pod kube-system/coredns-5d78c9869d-wh4l9 Mar 20 21:00:45.508: INFO: Creating log watcher for controller kube-system/coredns-5d78c9869d-wh4l9, container coredns Mar 20 21:00:45.911: INFO: Describing Pod kube-system/csi-azuredisk-controller-56db99df6c-sbnn7 Mar 20 21:00:45.911: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-sbnn7, container csi-snapshotter Mar 20 21:00:45.911: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-sbnn7, container liveness-probe Mar 20 21:00:45.911: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-sbnn7, container csi-provisioner Mar 20 21:00:45.911: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-sbnn7, container azuredisk Mar 20 21:00:45.911: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-sbnn7, container csi-resizer Mar 20 21:00:45.911: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-sbnn7, container csi-attacher Mar 20 21:00:46.309: INFO: Describing Pod kube-system/csi-azuredisk-node-jtlzl Mar 20 21:00:46.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container liveness-probe Mar 20 21:00:46.309: INFO: 
Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container node-driver-registrar Mar 20 21:00:46.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container azuredisk Mar 20 21:00:46.707: INFO: Describing Pod kube-system/csi-azuredisk-node-win-778bd Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container liveness-probe Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container azuredisk Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container node-driver-registrar Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing Mar 20 21:00:47.110: INFO: Describing Pod kube-system/csi-azuredisk-node-win-nrh82 Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container node-driver-registrar Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container liveness-probe Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container azuredisk Mar 20 21:00:47.148: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing Mar 20 21:00:47.149: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing Mar 20 21:00:47.149: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing Mar 20 21:00:47.509: INFO: Describing Pod kube-system/csi-proxy-4v7zg Mar 20 21:00:47.510: INFO: Creating log watcher for controller kube-system/csi-proxy-4v7zg, container csi-proxy Mar 20 21:00:47.912: INFO: Describing Pod kube-system/csi-proxy-bnsgh Mar 20 21:00:47.913: INFO: Creating log watcher for controller kube-system/csi-proxy-bnsgh, container csi-proxy Mar 20 21:00:48.308: INFO: Describing Pod kube-system/etcd-capz-conf-1plfqp-control-plane-2j2gm Mar 20 21:00:48.309: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-1plfqp-control-plane-2j2gm, container etcd Mar 20 21:00:48.708: INFO: Describing Pod kube-system/kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm Mar 20 21:00:48.708: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm, container kube-apiserver Mar 20 21:00:49.108: INFO: Describing Pod kube-system/kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm Mar 20 
21:00:49.108: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm, container kube-controller-manager Mar 20 21:00:49.508: INFO: Describing Pod kube-system/kube-proxy-7gqj4 Mar 20 21:00:49.508: INFO: Creating log watcher for controller kube-system/kube-proxy-7gqj4, container kube-proxy Mar 20 21:00:49.909: INFO: Describing Pod kube-system/kube-proxy-windows-527hb Mar 20 21:00:49.909: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-527hb, container kube-proxy Mar 20 21:00:50.308: INFO: Describing Pod kube-system/kube-proxy-windows-wmp2s Mar 20 21:00:50.308: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-wmp2s, container kube-proxy Mar 20 21:00:50.708: INFO: Describing Pod kube-system/kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm Mar 20 21:00:50.708: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm, container kube-scheduler Mar 20 21:00:51.108: INFO: Describing Pod kube-system/metrics-server-6987569d96-kbkwt Mar 20 21:00:51.109: INFO: Creating log watcher for controller kube-system/metrics-server-6987569d96-kbkwt, container metrics-server Mar 20 21:00:51.506: INFO: Describing Pod tigera-operator/tigera-operator-59c686f986-m7hjf Mar 20 21:00:51.506: INFO: Fetching pod logs took 9.044281974s Mar 20 21:00:51.506: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp Azure activity log Mar 20 21:00:51.506: INFO: Creating log watcher for controller tigera-operator/tigera-operator-59c686f986-m7hjf, container tigera-operator Mar 20 21:00:53.818: INFO: Fetching activity logs took 2.311776177s Mar 20 21:00:53.818: INFO: Deleting all clusters in the capz-conf-1plfqp namespace STEP: Deleting cluster capz-conf-1plfqp - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/20/23 21:00:53.837 INFO: Waiting for the Cluster capz-conf-1plfqp/capz-conf-1plfqp to be deleted STEP: Waiting for cluster capz-conf-1plfqp to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/20/23 21:00:53.851 Mar 20 21:06:34.026: INFO: Deleting namespace used for hosting the "conformance-tests" test spec INFO: Deleting namespace capz-conf-1plfqp Mar 20 21:06:34.047: INFO: Checking if any resources are left over in Azure for spec "conformance-tests" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:220 @ 03/20/23 21:06:34.776 < Exit [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:231 @ 03/20/23 21:06:47.024 (11m34.492s)
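Note on the deprecation warnings repeated throughout the output above: they come from the Kubernetes e2e runner still passing Ginkgo v1-style flags (--ginkgo.flakeAttempts, --ginkgo.progress, --ginkgo.slow-spec-threshold) to a Ginkgo 2.9.1 binary. They are noise rather than the cause of this failure, but the replacements the warnings point at are straightforward. A minimal sketch, assuming Ginkgo v2 and a hypothetical spec, of the decorator-based equivalents the warnings recommend (PollProgressAfter in place of --progress/--slow-spec-threshold, FlakeAttempts in place of --ginkgo.flakeAttempts):

    package e2e_test

    import (
        "testing"
        "time"

        . "github.com/onsi/ginkgo/v2"
        . "github.com/onsi/gomega"
    )

    // TestE2E wires Ginkgo into `go test`, mirroring the TestE2E entry point seen in the log.
    func TestE2E(t *testing.T) {
        RegisterFailHandler(Fail)
        RunSpecs(t, "capz conformance (sketch)")
    }

    var _ = Describe("conformance-tests", func() {
        // PollProgressAfter: once the spec has run longer than the given duration, Ginkgo
        // emits periodic progress reports showing where it is stuck.
        // FlakeAttempts is the decorator form of --flake-attempts.
        It("creates a workload cluster and runs kubetest", PollProgressAfter(2*time.Minute), FlakeAttempts(2), func() {
            Expect(true).To(BeTrue()) // placeholder body for the sketch
        })
    })

On the command line the same effect would come from flags such as --poll-progress-after=2m, --show-node-events, and --flake-attempts, which are the spellings the warnings themselves suggest.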
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the intree cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster with VMSS flex machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters on public MEC [OPTIONAL] with 1 control plane nodes and 1 worker node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 139 lines ...
 Dload Upload Total Spent Left Speed
 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
 100 138 100 138 0 0 4312 0 --:--:-- --:--:-- --:--:-- 4312
 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
 100 32 100 32 0 0 123 0 --:--:-- --:--:-- --:--:-- 2000
using CI_VERSION=v1.27.0-beta.0.25+15894cfc85cab6
using KUBERNETES_VERSION=v1.27.0-beta.0.25+15894cfc85cab6
using IMAGE_TAG=v1.27.0-beta.0.29_117662b4a973d5
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5 not found: manifest unknown: manifest tagged by "v1.27.0-beta.0.29_117662b4a973d5" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0320 20:02:45] Verifying Prerequisites....
+++ [0320 20:02:45] Building Docker image kube-build:build-a0d1e9fdaf-5-v1.27.0-go1.20.2-bullseye.0
+++ [0320 20:04:52] Creating data container kube-build-data-a0d1e9fdaf-5-v1.27.0-go1.20.2-bullseye.0
+++ [0320 20:04:54] Syncing sources to container
... skipping 812 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98
  INFO: Cluster name is capz-conf-1plfqp
  STEP: Creating namespace "capz-conf-1plfqp" for hosting the cluster @ 03/20/23 20:33:22.243
  Mar 20 20:33:22.243: INFO: starting to create namespace for hosting the "capz-conf-1plfqp" test spec
  2023/03/20 20:33:22 failed trying to get namespace (capz-conf-1plfqp): namespaces "capz-conf-1plfqp" not found
  INFO: Creating namespace capz-conf-1plfqp
  INFO: Creating event watcher for namespace "capz-conf-1plfqp"
  conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 03/20/23 20:33:22.333
  conformance-tests
    Name | N | Min | Median | Mean | StdDev | Max
  INFO: Creating the workload cluster with name "capz-conf-1plfqp" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-beta.0.25+15894cfc85cab6, 1 control-plane machines, 0 worker machines)
... skipping 99 lines ...
====================================================
Random Seed: 1679344953 - will randomize all specs
Will run 348 of 7207 specs
Running in parallel across 4 processes
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.318 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
  Timeline >>
  Mar 20 20:42:33.697: INFO: >>> kubeConfig: /tmp/kubeconfig
  Mar 20 20:42:33.699: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
... skipping 63 lines ...
Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for kube-proxy-x9kfz: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-x9kfz to capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-controller-manager:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-7gqj4 Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-7gqj4: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-7gqj4 to capz-conf-1plfqp-control-plane-2j2gm Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-x9kfz: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedMount: MountVolume.SetUp failed for volume "kube-proxy" : object "kube-system"/"kube-proxy" not registered Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-x9kfz: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-m8dpv" : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:capz-conf-1plfqp-control-plane-2j2gm" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'capz-conf-1plfqp-control-plane-2j2gm' and this object, object "kube-system"/"kube-root-ca.crt" not registered] Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5" Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-apiserver Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-controller-manager Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-scheduler Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:04 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5" in 3.562353503s (3.562464604s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:05 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: 
{kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-apiserver ... skipping 4 lines ... Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-scheduler Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5" in 2.140395691s (8.671982443s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-scheduler Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5" in 3.499445571s (11.923340955s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:21 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Unhealthy: Startup probe failed: HTTP probe failed with statuscode: 500 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:24 +0000 UTC - event for kube-controller-manager: {kube-controller-manager } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_11cc3f7d-b40e-4cbe-be22-ee508e31eb2b became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:26 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_d0286c4b-aa0a-48d9-b282-91d3450fb492 became leader Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:31 +0000 UTC - event for metrics-server: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-6987569d96 to 1 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:31 +0000 UTC - event for metrics-server-6987569d96: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-6987569d96-kbkwt Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9e13897e7fed205b2819620b91a752b5b98b00008e7f1e2aad8184773be3dc43": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "32f577f0bea9664ec11ac0e5b98a62af85a154812095aa16ee7f9349556e49a7": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/ Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "da59f6a3840523d29dc136abb059229721874304ef229111992d9d331dfd85cf": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container coredns Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:39 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" in 5.443256687s (6.220329455s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:40 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container metrics-server Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:41 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container metrics-server Mar 20 20:55:04.016: INFO: At 2023-03-20 
20:40:11 +0000 UTC - event for csi-azuredisk-controller: {deployment-controller } ScalingReplicaSet: Scaled up replica set csi-azuredisk-controller-56db99df6c to 1 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-controller-56db99df6c: {replicaset-controller } SuccessfulCreate: Created pod: csi-azuredisk-controller-56db99df6c-sbnn7 ... skipping 53 lines ... Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:23 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} Killing: Stopping container kube-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:27 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Created: Created container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:27 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Started: Started container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:28 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Killing: Stopping container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:31 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 4.2654753s (9.082484s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:32 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 415.267ms (415.267ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:32 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-wmp2s_kube-system(bcd38796-26a8-4f15-9513-2a8ac58d2ba4) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:43 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Created: Created container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:43 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Started: Started container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:44 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Killing: Stopping container containerd-logger Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:44 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 470.5145ms (470.5145ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:45 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-527hb_kube-system(00140840-3274-4053-b4b9-49e8d5996de7) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:49 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 595.9949ms (595.9949ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:55 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 607.4595ms (607.4595ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:03 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: 
Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 525.7424ms (525.7424ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:06 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-xxz7w_kube-system(e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:07 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-nrh82 Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:07 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-bnsgh Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:08 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:08 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:18 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 498.4167ms (498.4167ms including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 17.1572849s (17.1572849s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Created: Created container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Started: Started container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:27 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Killing: Stopping container init Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:27 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:31 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-ng4wl_kube-system(bd28dbc9-32d2-41df-8201-42b78981a1f5) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Created: Created container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 17.127592s (34.2429305s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Started: Started container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:44 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Killing: Stopping container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:48 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulled: Container image 
"ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:43:05 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-bnsgh_kube-system(d8246000-ea4b-4f56-a4b8-755b44656004) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:46 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-778bd Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:46 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-4v7zg Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:47 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:47 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:09 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 21.9331601s (21.933656s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:09 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Created: Created container init ... skipping 2 lines ... Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:15 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:31 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 22.3031678s (44.1908144s including waiting) Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:32 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Created: Created container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:32 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Started: Started container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:33 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Killing: Stopping container csi-proxy Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:37 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:54 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-4v7zg_kube-system(4bfb48ce-a08e-4c4b-8d11-594ea6912696) Mar 20 20:55:04.070: INFO: POD NODE PHASE GRACE CONDITIONS Mar 20 20:55:04.070: INFO: containerd-logger-ng4wl capz-conf-gm7xg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:54:10 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:54:10 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC }] Mar 20 20:55:04.070: 
INFO: containerd-logger-xxz7w capz-conf-vvvcd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:45 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:45 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:54 +0000 UTC }] Mar 20 20:55:04.070: INFO: coredns-5d78c9869d-c58vk capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC }] Mar 20 20:55:04.070: INFO: coredns-5d78c9869d-wh4l9 capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC }] Mar 20 20:55:04.070: INFO: csi-azuredisk-controller-56db99df6c-sbnn7 capz-conf-1plfqp-control-plane-2j2gm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC }] ... skipping 137 lines ... ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ... skipping 28 lines ... 
] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ], "schemaVersion": "2016-08-11" } Logging started... ENDLOG for container kube-system:containerd-logger-xxz7w:containerd-logger Mar 20 20:55:09.198: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd) Mar 20 20:55:09.198: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:liveness-probe on node capz-conf-gm7xg Mar 20 20:55:09.198: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:liveness-probe Mar 20 20:55:09.597: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd) Mar 20 20:55:09.597: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:node-driver-registrar on node capz-conf-gm7xg Mar 20 20:55:09.597: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:node-driver-registrar Mar 20 20:55:09.998: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd) Mar 20 20:55:09.998: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:azuredisk on node capz-conf-gm7xg Mar 20 20:55:09.998: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:azuredisk Mar 20 20:55:10.397: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82) Mar 20 20:55:10.397: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:liveness-probe on node capz-conf-vvvcd Mar 20 20:55:10.397: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:liveness-probe Mar 20 20:55:10.798: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82) Mar 20 20:55:10.798: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:node-driver-registrar on node capz-conf-vvvcd Mar 20 20:55:10.798: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:node-driver-registrar Mar 20 20:55:11.197: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container azuredisk, err: the server rejected our request for an unknown reason (get pods 
csi-azuredisk-node-win-nrh82)
Mar 20 20:55:11.197: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:azuredisk on node capz-conf-vvvcd
Mar 20 20:55:11.197: INFO: : STARTLOG
ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:azuredisk
Mar 20 20:55:11.413: INFO: Logs of kube-system/csi-proxy-4v7zg:csi-proxy on node capz-conf-gm7xg
Mar 20 20:55:11.413: INFO: : STARTLOG
... skipping 17 lines ...
discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose parameter. For a list of approved verbs, type Get-Verb.
Running kub-proxy service.
Waiting for HNS network Calico to be created...
ENDLOG for container kube-system:kube-proxy-windows-wmp2s:kube-proxy
[FAILED] in [SynchronizedBeforeSuite] - test/e2e/e2e.go:242 @ 03/20/23 20:55:12.014
<< Timeline
[FAILED] Error waiting for all pods to be running and ready: Timed out after 600.000s.
Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0). 10 / 18 pods were running and ready. Expected 4 pod replicas, 4 are Running and Ready.
Pods that were neither completed nor running:
    <[]v1.Pod | len:8, cap:8>:
    - metadata:
... skipping 237 lines ...
      imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505
      lastState:
        terminated:
          containerID: containerd://1a2e8e1c5f111dd334de452542dd1eb113ab1ba967088428f64f5c344c449e41
          exitCode: -1073741510
          finishedAt: "2023-03-20T20:54:06Z"
          reason: Error
          startedAt: "2023-03-20T20:54:05Z"
      name: containerd-logger
      ready: false
      restartCount: 9
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-ng4wl_kube-system(bd28dbc9-32d2-41df-8201-42b78981a1f5)
          reason: CrashLoopBackOff
    hostIP: 10.1.0.4
    phase: Running
    podIP: 10.1.0.4
    podIPs:
    - ip: 10.1.0.4
... skipping 240 lines ...
      imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505
      lastState:
        terminated:
          containerID: containerd://8d389ab98b1e1381585a689130d3632e29d79314045e6027a7505ced598f75ac
          exitCode: -1073741510
          finishedAt: "2023-03-20T20:52:41Z"
          reason: Error
          startedAt: "2023-03-20T20:52:40Z"
      name: containerd-logger
      ready: false
      restartCount: 9
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-xxz7w_kube-system(e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c)
          reason: CrashLoopBackOff
    hostIP: 10.1.0.5
    phase: Running
    podIP: 10.1.0.5
    podIPs:
    - ip: 10.1.0.5
... skipping 1241 lines ...
      imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba
      lastState:
        terminated:
          containerID: containerd://54e337036669b590080a33498de1e29bfe77c5ddc7a9d11a0edabf34a8b47d31
          exitCode: -1073741510
          finishedAt: "2023-03-20T20:50:59Z"
          reason: Error
          startedAt: "2023-03-20T20:50:58Z"
      name: csi-proxy
      ready: false
      restartCount: 7
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-4v7zg_kube-system(4bfb48ce-a08e-4c4b-8d11-594ea6912696)
          reason: CrashLoopBackOff
    hostIP: 10.1.0.4
    phase: Running
    podIP: 10.1.0.4
    podIPs:
    - ip: 10.1.0.4
... skipping 211 lines ...
      imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba
      lastState:
        terminated:
          containerID: containerd://2353a4851cba8f20bd2fc79edf323753812abc396be6ff59943c0f1f8776e173
          exitCode: -1073741510
          finishedAt: "2023-03-20T20:53:14Z"
          reason: Error
          startedAt: "2023-03-20T20:53:13Z"
      name: csi-proxy
      ready: false
      restartCount: 9
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-bnsgh_kube-system(d8246000-ea4b-4f56-a4b8-755b44656004)
          reason: CrashLoopBackOff
    hostIP: 10.1.0.5
    phase: Running
    podIP: 10.1.0.5
    podIPs:
    - ip: 10.1.0.5
... skipping 279 lines ...
      imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed
      lastState:
        terminated:
          containerID: containerd://39f24b2731756787768c3581c711a27fc3bc56470b3c79d1d4bf2d3bae83468b
          exitCode: -1073741510
          finishedAt: "2023-03-20T20:51:56Z"
          reason: Error
          startedAt: "2023-03-20T20:51:56Z"
      name: kube-proxy
      ready: false
      restartCount: 9
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-527hb_kube-system(00140840-3274-4053-b4b9-49e8d5996de7)
          reason: CrashLoopBackOff
    hostIP: 10.1.0.4
    phase: Running
    podIP: 10.1.0.4
    podIPs:
    - ip: 10.1.0.4
... skipping 279 lines ...
      imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed
      lastState:
        terminated:
          containerID: containerd://2e0305ad0952156deb179bd5d9b7d8b1583328d2294ce9dddab65ae4da035397
          exitCode: -1073741510
          finishedAt: "2023-03-20T20:51:53Z"
          reason: Error
          startedAt: "2023-03-20T20:51:52Z"
      name: kube-proxy
      ready: false
      restartCount: 9
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-wmp2s_kube-system(bcd38796-26a8-4f15-9513-2a8ac58d2ba4)
          reason: CrashLoopBackOff
    hostIP: 10.1.0.5
    phase: Running
    podIP: 10.1.0.5
    podIPs:
    - ip: 10.1.0.5
    qosClass: BestEffort
    startTime: "2023-03-20T20:40:55Z"
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:242 @ 03/20/23 20:55:12.014
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.311 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
  [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
  The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
  In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.335 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
  [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
  The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
  In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
------------------------------
[SynchronizedBeforeSuite] [FAILED] [758.331 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
  [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
  The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
Summarizing 4 Failures:
  [FAIL] [SynchronizedBeforeSuite]
  test/e2e/e2e.go:77
  [FAIL] [SynchronizedBeforeSuite]
  test/e2e/e2e.go:77
  [FAIL] [SynchronizedBeforeSuite]
  test/e2e/e2e.go:77
  [FAIL] [SynchronizedBeforeSuite]
  test/e2e/e2e.go:242

Ran 0 of 7207 Specs in 758.428 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.
I0320 20:42:33.233579 15 e2e.go:117] Starting e2e run "6de80a66-9fe0-470f-a3de-d8b524a156e7" on Ginkgo node 1

You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
  --ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated: --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.9.1

--- FAIL: TestE2E (758.86s)
FAIL
I0320 20:42:33.231585 16 e2e.go:117] Starting e2e run "1c4d63ab-4761-4bda-93a7-5f81513b835c" on Ginkgo node 2

You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
  --ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated: --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.9.1

--- FAIL: TestE2E (758.81s)
FAIL
I0320 20:42:33.243068 17 e2e.go:117] Starting e2e run "7672dc9e-5156-48cf-a02d-283c07070e7d" on Ginkgo node 3

You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
  --ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated: --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.9.1

--- FAIL: TestE2E (758.80s)
FAIL
I0320 20:42:33.229510 19 e2e.go:117] Starting e2e run "def2f57d-e7a3-42b4-89c4-1100437748c0" on Ginkgo node 4

You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.slow-spec-threshold is deprecated: --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
  --ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.9.1

--- FAIL: TestE2E (758.81s)
FAIL

Ginkgo ran 1 suite in 12m38.988656037s

Test Suite Failed

You're using deprecated Ginkgo functionality:
=============================================
  --slowSpecThreshold is deprecated: use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.9.1

[FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532
Mar 20 20:55:12.532: INFO: FAILED!
Mar 20 20:55:12.533: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
Mar 20 20:55:12.533: INFO: Dumping all the Cluster API resources in the "capz-conf-1plfqp" namespace
STEP: Dumping logs from the "capz-conf-1plfqp" workload cluster @ 03/20/23 20:55:12.877
Mar 20 20:55:12.877: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp logs
Mar 20 20:55:12.914: INFO: Collecting logs for Linux node capz-conf-1plfqp-control-plane-2j2gm in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp
Mar 20 20:55:25.757: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-control-plane-2j2gm
Mar 20 20:55:26.746: INFO: Collecting logs for Windows node capz-conf-gm7xg in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp
Mar 20 20:58:06.583: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-gm7xg to /logs/artifacts/clusters/capz-conf-1plfqp/machines/capz-conf-1plfqp-md-win-65dbf97bf6-csgg7/crashdumps.tar
Mar 20 20:58:08.384: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-md-win-gm7xg
Failed to get logs for Machine capz-conf-1plfqp-md-win-65dbf97bf6-csgg7, Cluster capz-conf-1plfqp/capz-conf-1plfqp: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Mar 20 20:58:09.375: INFO: Collecting logs for Windows node capz-conf-vvvcd in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp
Mar 20 21:00:38.829: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-vvvcd to /logs/artifacts/clusters/capz-conf-1plfqp/machines/capz-conf-1plfqp-md-win-65dbf97bf6-j9qvz/crashdumps.tar
Mar 20 21:00:40.689: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-md-win-vvvcd
Failed to get logs for Machine capz-conf-1plfqp-md-win-65dbf97bf6-j9qvz, Cluster capz-conf-1plfqp/capz-conf-1plfqp: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Mar 20 21:00:41.549: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp nodes
Mar 20 21:00:41.850: INFO: Describing Node capz-conf-1plfqp-control-plane-2j2gm
Mar 20 21:00:42.067: INFO: Describing Node capz-conf-gm7xg
Mar 20 21:00:42.265: INFO: Describing Node capz-conf-vvvcd
Mar 20 21:00:42.461: INFO: Fetching nodes took 912.555835ms
Mar 20 21:00:42.462: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp pod logs
... skipping 5 lines ...
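Both Windows machines fail log collection the same way: the remote Get-Content "C:\cni.log" and crash-dump tar commands exit with status 1, so no CNI log or crash dumps are archived for them, although the dump step continues. A hedged Go sketch of how a collector might treat such an exit status as "nothing to collect" instead of a hard failure (illustrative only; the helper name, command, and path are assumptions, not the suite's actual code, and it assumes exit status 1 means the file is simply absent):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runOptional runs a node-side command and treats exit status 1 as
// "nothing to collect" rather than a collection failure.
func runOptional(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			// e.g. C:\cni.log does not exist on this node; log and move on.
			fmt.Printf("skipping %s: %s\n", name, string(out))
			return nil
		}
		return err
	}
	fmt.Print(string(out))
	return nil
}

func main() {
	// Hypothetical local equivalent of the failing remote command.
	if err := runOptional("powershell", "-Command", `Get-Content "C:\cni.log"`); err != nil {
		fmt.Println("log collection failed:", err)
	}
}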
Mar 20 21:00:42.883: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-59d9cb8fbb-8ft2d, container calico-kube-controllers
Mar 20 21:00:42.959: INFO: Describing Pod calico-system/calico-node-bdvzb
Mar 20 21:00:42.959: INFO: Creating log watcher for controller calico-system/calico-node-bdvzb, container calico-node
Mar 20 21:00:43.043: INFO: Describing Pod calico-system/calico-node-windows-9f96h
Mar 20 21:00:43.043: INFO: Creating log watcher for controller calico-system/calico-node-windows-9f96h, container calico-node-startup
Mar 20 21:00:43.044: INFO: Creating log watcher for controller calico-system/calico-node-windows-9f96h, container calico-node-felix
Mar 20 21:00:43.100: INFO: Error starting logs stream for pod calico-system/calico-node-windows-9f96h, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-9f96h" is waiting to start: PodInitializing
Mar 20 21:00:43.100: INFO: Error starting logs stream for pod calico-system/calico-node-windows-9f96h, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-9f96h" is waiting to start: PodInitializing
Mar 20 21:00:43.115: INFO: Describing Pod calico-system/calico-node-windows-k9kth
Mar 20 21:00:43.115: INFO: Creating log watcher for controller calico-system/calico-node-windows-k9kth, container calico-node-startup
Mar 20 21:00:43.115: INFO: Creating log watcher for controller calico-system/calico-node-windows-k9kth, container calico-node-felix
Mar 20 21:00:43.169: INFO: Error starting logs stream for pod calico-system/calico-node-windows-k9kth, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-k9kth" is waiting to start: PodInitializing
Mar 20 21:00:43.170: INFO: Error starting logs stream for pod calico-system/calico-node-windows-k9kth, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-k9kth" is waiting to start: PodInitializing
Mar 20 21:00:43.507: INFO: Describing Pod calico-system/calico-typha-96fb785dc-c7sr9
Mar 20 21:00:43.507: INFO: Creating log watcher for controller calico-system/calico-typha-96fb785dc-c7sr9, container calico-typha
Mar 20 21:00:43.908: INFO: Describing Pod calico-system/csi-node-driver-j9ptp
Mar 20 21:00:43.908: INFO: Creating log watcher for controller calico-system/csi-node-driver-j9ptp, container csi-node-driver-registrar
Mar 20 21:00:43.908: INFO: Creating log watcher for controller calico-system/csi-node-driver-j9ptp, container calico-csi
Mar 20 21:00:44.310: INFO: Describing Pod kube-system/containerd-logger-ng4wl
... skipping 16 lines ...
Mar 20 21:00:46.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container node-driver-registrar
Mar 20 21:00:46.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container azuredisk
Mar 20 21:00:46.707: INFO: Describing Pod kube-system/csi-azuredisk-node-win-778bd
Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container liveness-probe
Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container azuredisk
Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container node-driver-registrar
Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing
Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing
Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing
Mar 20 21:00:47.110: INFO: Describing Pod kube-system/csi-azuredisk-node-win-nrh82
Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container node-driver-registrar
Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container liveness-probe
Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container azuredisk
Mar 20 21:00:47.148: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing
Mar 20 21:00:47.149: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing
Mar 20 21:00:47.149: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing
Mar 20 21:00:47.509: INFO: Describing Pod kube-system/csi-proxy-4v7zg
Mar 20 21:00:47.510: INFO: Creating log watcher for controller kube-system/csi-proxy-4v7zg, container csi-proxy
Mar 20 21:00:47.912: INFO: Describing Pod kube-system/csi-proxy-bnsgh
Mar 20 21:00:47.913: INFO: Creating log watcher for controller kube-system/csi-proxy-bnsgh, container csi-proxy
Mar 20 21:00:48.308: INFO: Describing Pod kube-system/etcd-capz-conf-1plfqp-control-plane-2j2gm
Mar 20 21:00:48.309: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-1plfqp-control-plane-2j2gm, container etcd
... skipping 21 lines ...
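The same pattern runs through the whole dump: every Windows-hosted workload (calico-node-windows, csi-azuredisk-node-win, csi-proxy, kube-proxy-windows) is either stuck in PodInitializing or crash-looping. A minimal client-go sketch that lists which kube-system containers are still waiting and why (illustrative only, not part of this suite; it assumes a kubeconfig for the workload cluster at the hypothetical path shown):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the workload cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-1plfqp.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, st := range pod.Status.ContainerStatuses {
			switch {
			case st.State.Waiting != nil:
				// e.g. CrashLoopBackOff or PodInitializing, as seen above.
				fmt.Printf("%s/%s: waiting (%s), restarts=%d\n",
					pod.Name, st.Name, st.State.Waiting.Reason, st.RestartCount)
			case st.LastTerminationState.Terminated != nil:
				fmt.Printf("%s/%s: last exit code %d\n",
					pod.Name, st.Name, st.LastTerminationState.Terminated.ExitCode)
			}
		}
	}
}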
INFO: Waiting for the Cluster capz-conf-1plfqp/capz-conf-1plfqp to be deleted
STEP: Waiting for cluster capz-conf-1plfqp to be deleted @ 03/20/23 21:00:53.851
Mar 20 21:06:34.026: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-1plfqp
Mar 20 21:06:34.047: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs @ 03/20/23 21:06:34.776
• [FAILED] [2004.781 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc000f9b470>: {
          error: <*errors.withMessage | 0xc002656300>{
              cause: <*errors.errorString | 0xc00021f130>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
        /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 +0x175a
... skipping 6 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.007 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227

Ran 1 of 25 Specs in 2183.473 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 24 Skipped
--- FAIL: TestE2E (2183.48s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:297
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:300

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.8.4

Ginkgo ran 1 suite in 38m22.34137712s

Test Suite Failed
make[3]: *** [Makefile:663: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:678: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:694: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:704: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 8 lines ...