PR | claudiubelu: Refactored kubelet's kuberuntime_sandbox
Result | FAILURE
Tests | 1 failed / 2 succeeded
Started |
Elapsed | 1h7m
Revision | 5e605d81d57e2309b3c08f821c9dc41372f802c7
Refs | 114185
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
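The `--ginkgo.focus` value above narrows the run to the single `conformance-tests` spec. A minimal Go sketch of how that filter behaves; the fully composed spec text `"capz-e2e [It] Conformance Tests conformance-tests"` is an assumption inferred from the focus pattern and the spec names later in the log, not something printed verbatim here:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus expression passed to the suite via --ginkgo.focus in the command above.
	focus := regexp.MustCompile(`capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$`)

	// Hypothetical fully-composed spec text; only specs whose text matches the
	// focus expression are run, everything else in the suite is skipped.
	spec := "capz-e2e [It] Conformance Tests conformance-tests"

	fmt.Println(focus.MatchString(spec)) // true
}
```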
[FAILED] Unexpected error:
    <*errors.withStack | 0xc002e28f60>: {
        error: <*errors.withMessage | 0xc002b12900>{
            cause: <*errors.errorString | 0xc0004fa310>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048
(from junit.e2e_suite.1.xml)
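The nested `*errors.withStack` / `*errors.withMessage` / `*errors.errorString` values in the dump are the shape produced by wrapping a plain error with `github.com/pkg/errors`. A minimal sketch of that pattern, assuming a hypothetical `runConformanceContainer` helper standing in for whatever step actually runs the conformance container:

```go
package main

import (
	stderrors "errors"
	"fmt"

	"github.com/pkg/errors"
)

// runConformanceContainer is a hypothetical stand-in for the step that runs the
// conformance container and surfaces its non-zero exit code.
func runConformanceContainer() error {
	// stdlib errors.New yields the *errors.errorString seen as the innermost cause.
	return stderrors.New("error container run failed with exit code 1")
}

func main() {
	if err := runConformanceContainer(); err != nil {
		// pkg/errors.Wrap layers a message (*errors.withMessage) and a call stack
		// (*errors.withStack) on top of the cause, matching the dump above.
		wrapped := errors.Wrap(err, "Unable to run conformance tests")

		fmt.Println(wrapped)               // Unable to run conformance tests: error container run failed with exit code 1
		fmt.Println(errors.Cause(wrapped)) // error container run failed with exit code 1
	}
}
```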
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:54 @ 03/16/23 22:38:34.549 INFO: Cluster name is capz-conf-0bueug STEP: Creating namespace "capz-conf-0bueug" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:38:34.549 Mar 16 22:38:34.549: INFO: starting to create namespace for hosting the "capz-conf-0bueug" test spec INFO: Creating namespace capz-conf-0bueug INFO: Creating event watcher for namespace "capz-conf-0bueug" < Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:54 @ 03/16/23 22:38:34.602 (53ms) > Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98 @ 03/16/23 22:38:34.602 conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 03/16/23 22:38:34.602 conformance-tests Name | N | Min | Median | Mean | StdDev | Max ============================================================================================ cluster creation [duration] | 1 | 7m37.6009s | 7m37.6009s | 7m37.6009s | 0s | 7m37.6009s INFO: Creating the workload cluster with name "capz-conf-0bueug" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-alpha.3.828+a34e37c9963af5, 1 control-plane machines, 0 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-conf-0bueug --infrastructure (default) --kubernetes-version v1.27.0-alpha.3.828+a34e37c9963af5 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-presubmit-artifacts-windows-containerd INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/cluster_helpers.go:134 @ 03/16/23 22:38:37.551 INFO: Waiting for control plane to be initialized STEP: Ensuring KubeadmControlPlane is initialized - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:263 @ 03/16/23 22:40:27.677 STEP: Ensuring API Server is reachable before applying Helm charts - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:269 @ 03/16/23 22:43:07.896 STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 03/16/23 22:43:08.333 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 03/16/23 22:43:08.334 Mar 16 22:43:08.406: INFO: getting history for release projectcalico Mar 16 22:43:08.440: INFO: Release projectcalico does not exist, installing it Mar 16 22:43:09.369: INFO: creating 1 resource(s) Mar 16 22:43:09.463: INFO: creating 1 resource(s) Mar 16 22:43:09.547: INFO: creating 1 resource(s) Mar 16 22:43:09.630: INFO: creating 1 resource(s) Mar 16 22:43:09.728: INFO: creating 1 resource(s) Mar 16 22:43:09.822: INFO: creating 1 resource(s) Mar 16 22:43:09.952: INFO: creating 1 resource(s) Mar 16 22:43:10.066: INFO: creating 1 resource(s) Mar 16 22:43:10.150: INFO: creating 1 resource(s) Mar 16 22:43:10.236: INFO: creating 1 resource(s) Mar 16 22:43:10.314: INFO: creating 1 resource(s) Mar 16 22:43:10.394: INFO: creating 1 resource(s) Mar 16 22:43:10.476: INFO: creating 1 resource(s) Mar 
16 22:43:10.555: INFO: creating 1 resource(s) Mar 16 22:43:10.638: INFO: creating 1 resource(s) Mar 16 22:43:10.740: INFO: creating 1 resource(s) Mar 16 22:43:10.844: INFO: creating 1 resource(s) Mar 16 22:43:10.935: INFO: creating 1 resource(s) Mar 16 22:43:11.044: INFO: creating 1 resource(s) Mar 16 22:43:11.227: INFO: creating 1 resource(s) Mar 16 22:43:11.519: INFO: creating 1 resource(s) Mar 16 22:43:11.564: INFO: Clearing discovery cache Mar 16 22:43:11.565: INFO: beginning wait for 21 resources with timeout of 1m0s Mar 16 22:43:14.107: INFO: creating 1 resource(s) Mar 16 22:43:14.627: INFO: creating 6 resource(s) Mar 16 22:43:15.378: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:58 @ 03/16/23 22:43:15.76 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:43:16.019 Mar 16 22:43:16.019: INFO: starting to wait for deployment to become available Mar 16 22:43:26.091: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.072046262s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:64 @ 03/16/23 22:43:26.092 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:43:26.378 Mar 16 22:43:26.378: INFO: starting to wait for deployment to become available Mar 16 22:44:17.096: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.718139287s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:44:17.422 Mar 16 22:44:17.422: INFO: starting to wait for deployment to become available Mar 16 22:44:17.455: INFO: Deployment calico-system/calico-typha is now available, took 33.140822ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:69 @ 03/16/23 22:44:17.455 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:44:17.746 Mar 16 22:44:17.746: INFO: starting to wait for deployment to become available Mar 16 22:44:37.896: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.15024355s INFO: Waiting for the first control plane machine managed by capz-conf-0bueug/capz-conf-0bueug-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/controlplane_helpers.go:132 @ 03/16/23 22:44:37.922 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:77 @ 03/16/23 22:44:37.93 Mar 16 22:44:37.985: INFO: getting history for release azuredisk-csi-driver-oot Mar 16 22:44:38.018: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Mar 16 22:44:40.612: INFO: creating 1 resource(s) Mar 16 22:44:40.707: INFO: creating 18 resource(s) Mar 16 22:44:41.057: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:87 @ 03/16/23 22:44:41.057 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:44:41.32 Mar 16 22:44:41.320: INFO: starting to wait for deployment to become available Mar 16 22:45:12.001: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.680902422s INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-conf-0bueug/capz-conf-0bueug-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/controlplane_helpers.go:164 @ 03/16/23 22:45:12.015 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/controlplane_helpers.go:209 @ 03/16/23 22:45:12.022 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/machinedeployment_helpers.go:102 @ 03/16/23 22:45:12.051 STEP: Checking all the machines controlled by capz-conf-0bueug-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/16/23 22:45:12.062 STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/machinedeployment_helpers.go:102 @ 03/16/23 22:45:12.072 STEP: Checking all the machines controlled by capz-conf-0bueug-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/16/23 22:46:12.16 INFO: Waiting for the machine pools to be provisioned INFO: Using repo-list '' for version 'v1.27.0-alpha.3.828+a34e37c9963af5' STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-slowSpecThreshold=120" "-nodes=4" "/usr/local/bin/e2e.test" "--" "--dump-logs-on-failure=false" "--report-prefix=kubetest." 
"--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "-ginkgo.timeout=3h" "-prepull-images=true" "-ginkgo.flakeAttempts=0" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Serial\\]|\\[Slow\\]|\\[Excluded:WindowsDocker\\]|Networking.Granular.Checks(.*)node-pod.communication|Guestbook.application.should.create.and.stop.a.working.application|device.plugin.for.Windows|Container.Lifecycle.Hook.when.create.a.pod.with.lifecycle.hook.should.execute(.*)http.hook.properly|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.slow-spec-threshold=120s" "-ginkgo.progress=true" "-ginkgo.trace=true" "-ginkgo.v=true" "-node-os-distro=windows" "-disable-log-dump=true" "-dump-logs-on-failure=true" "-ginkgo.focus=\\[Conformance\\]|\\[NodeConformance\\]|\\[sig-windows\\]|\\[sig-apps\\].CronJob|\\[sig-api-machinery\\].ResourceQuota|\\[sig-scheduling\\].SchedulerPreemption"] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/16/23 22:46:12.274 Running Suite: Kubernetes e2e suite - /usr/local/bin ==================================================== Random Seed: �[1m1679006773�[0m - will randomize all specs Will run �[1m348�[0m of �[1m7207�[0m specs Running in parallel across �[1m4�[0m processes �[38;5;243m------------------------------�[0m �[38;5;9m[SynchronizedBeforeSuite] [FAILED] [728.313 seconds]�[0m �[38;5;9m�[1m[SynchronizedBeforeSuite] �[0m �[38;5;243mtest/e2e/e2e.go:77�[0m �[38;5;243mTimeline >>�[0m Mar 16 22:46:13.817: INFO: >>> kubeConfig: /tmp/kubeconfig Mar 16 22:46:13.820: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 16 22:46:14.003: INFO: Condition Ready of node capz-conf-275z6 is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:10 +0000 UTC}]. Failure Mar 16 22:46:14.003: INFO: Condition Ready of node capz-conf-scwjd is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}]. Failure Mar 16 22:46:14.003: INFO: Unschedulable nodes= 2, maximum value for starting tests= 0 Mar 16 22:46:14.003: INFO: -> Node capz-conf-275z6 [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:10 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]] Mar 16 22:46:14.003: INFO: -> Node capz-conf-scwjd [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]] Mar 16 22:46:14.003: INFO: ==== node wait: 1 out of 3 nodes are ready, max notReady allowed 0. Need 2 more before starting. Mar 16 22:46:44.049: INFO: Condition Ready of node capz-conf-scwjd is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}]. 
Failure Mar 16 22:46:44.049: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0 Mar 16 22:46:44.049: INFO: -> Node capz-conf-scwjd [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]] Mar 16 22:46:44.049: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0. Need 1 more before starting. Mar 16 22:47:14.049: INFO: Condition Ready of node capz-conf-scwjd is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}]. Failure Mar 16 22:47:14.049: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0 Mar 16 22:47:14.049: INFO: -> Node capz-conf-scwjd [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]] Mar 16 22:47:14.049: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0. Need 1 more before starting. Mar 16 22:47:44.049: INFO: Condition Ready of node capz-conf-scwjd is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}]. Failure Mar 16 22:47:44.049: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0 Mar 16 22:47:44.049: INFO: -> Node capz-conf-scwjd [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-03-16 22:45:20 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master ]]] Mar 16 22:47:44.049: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0. Need 1 more before starting. �[1mSTEP:�[0m Collecting events from namespace "kube-system". �[38;5;243m@ 03/16/23 22:58:14.09�[0m �[1mSTEP:�[0m Found 193 events. 
�[38;5;243m@ 03/16/23 22:58:14.154�[0m Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:23 +0000 UTC - event for etcd-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:23 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.3.828_a34e37c9963af5" already present on machine Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:23 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.3.828_a34e37c9963af5" already present on machine Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:23 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.3.828_a34e37c9963af5" already present on machine Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for etcd-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container etcd Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for etcd-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container etcd Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-apiserver Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-apiserver Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-controller-manager Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-controller-manager Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-scheduler Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:25 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-scheduler Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:31 +0000 UTC - event for kube-controller-manager: {kube-controller-manager } LeaderElection: capz-conf-0bueug-control-plane-mj5bc_bbfd42a7-eac2-47b4-b495-d1a6ba09b200 became leader Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:34 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-0bueug-control-plane-mj5bc_2d8d789c-93aa-4770-aece-79a314eac692 became leader Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-5d78c9869d to 2 Mar 16 22:58:14.154: 
INFO: At 2023-03-16 22:42:44 +0000 UTC - event for coredns-5d78c9869d: {replicaset-controller } SuccessfulCreate: Created pod: coredns-5d78c9869d-nbrqn Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for coredns-5d78c9869d: {replicaset-controller } SuccessfulCreate: Created pod: coredns-5d78c9869d-jg2mq Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-6n5z6 Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulDelete: Deleted pod: kube-proxy-6n5z6 Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy-6n5z6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-6n5z6 to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-controller-manager:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-xbdgr Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-6n5z6: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "kube-proxy" : object "kube-system"/"kube-proxy" not registered Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-6n5z6: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-rhjfr" : object "kube-system"/"kube-root-ca.crt" not registered Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-xbdgr: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-proxy:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-xbdgr: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-xbdgr to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-scheduler:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet 
capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-apiserver Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-controller-manager Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-scheduler Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:48 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-scheduler Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:48 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-scheduler Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:48 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "capzci.azurecr.io/kube-scheduler:v1.27.0-alpha.3.830_9fce3cd4b80206" in 2.041926168s (2.042037077s including waiting) Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:50 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-controller-manager Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:50 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "capzci.azurecr.io/kube-controller-manager:v1.27.0-alpha.3.830_9fce3cd4b80206" in 2.255240691s (4.269356823s including waiting) Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:51 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-controller-manager Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:53 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206" in 2.510111229s (6.77641643s including waiting) Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:53 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-apiserver Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:53 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-apiserver Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:54 +0000 UTC - event for kube-proxy-xbdgr: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "capzci.azurecr.io/kube-proxy:v1.27.0-alpha.3.830_9fce3cd4b80206" in 1.489804312s (7.905868712s including waiting) Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:54 +0000 UTC - event for kube-proxy-xbdgr: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container kube-proxy Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:55 +0000 UTC - event for kube-proxy-xbdgr: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container kube-proxy Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:13 +0000 UTC - event for 
kube-controller-manager: {kube-controller-manager } LeaderElection: capz-conf-0bueug-control-plane-mj5bc_a022eaa2-18c0-4537-8e40-ee10fa566d66 became leader Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for etcd-capz-conf-0bueug-control-plane-mj5bc: {node-controller } NodeNotReady: Node is not ready Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {node-controller } NodeNotReady: Node is not ready Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {node-controller } NodeNotReady: Node is not ready Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for kube-proxy-xbdgr: {node-controller } NodeNotReady: Node is not ready Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {node-controller } NodeNotReady: Node is not ready Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for metrics-server: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-6987569d96 to 1 Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for metrics-server-6987569d96: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-6987569d96-8kswn Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-0bueug-control-plane-mj5bc_4790ca42-5f76-4363-8a8d-bc2307d9f033 became leader Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for metrics-server-6987569d96-8kswn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:35 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-jg2mq to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-nbrqn to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for metrics-server-6987569d96-8kswn: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-6987569d96-8kswn to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:52 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:52 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "927e714bcc0b5ae751075c38c9b7988d11d9f9ca0742dcc8ba26334e5813d4b8": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "946dd33ebcc4c32f473c66188ba91c8675b4c7a0b2183ebdecaba866f615d02d": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f06ca875435501c5124ae9ffa6822484534de14eb5e4418f383a442d84e03e54": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:54 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:54 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container coredns Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container coredns Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:08 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:13 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container coredns Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:13 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:13 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container coredns Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:16 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" in 2.557425539s (8.516519402s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:17 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container metrics-server Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:17 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container metrics-server Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:40 +0000 UTC - event for csi-azuredisk-controller: {deployment-controller } ScalingReplicaSet: Scaled up replica set csi-azuredisk-controller-56db99df6c to 1 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:40 +0000 UTC - event for csi-azuredisk-controller-56db99df6c: {replicaset-controller } SuccessfulCreate: Created pod: csi-azuredisk-controller-56db99df6c-9zdpw Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:40 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-controller-56db99df6c-9zdpw to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:40 +0000 UTC - event for csi-azuredisk-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-v7lzh Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:40 +0000 UTC - event for csi-azuredisk-node-v7lzh: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-v7lzh to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:41 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} 
Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:41 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:42 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container liveness-probe Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:42 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0" in 796.771041ms (796.870843ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:42 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:42 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container liveness-probe Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:44 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container csi-provisioner Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:44 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0" in 2.368523274s (3.07285123s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:44 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container csi-provisioner Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:44 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:45 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2" in 1.211315428s (3.408183986s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:45 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container node-driver-registrar Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:46 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:46 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container node-driver-registrar Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:49 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0" in 4.071220567s (4.946360278s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:49 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet 
capz-conf-0bueug-control-plane-mj5bc} Created: Created container csi-attacher Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:50 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:50 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container csi-attacher Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:56 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container azuredisk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:56 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 6.39706756s (10.086373919s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:56 +0000 UTC - event for csi-azuredisk-node-v7lzh: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container azuredisk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:02 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1" in 6.574943175s (12.677549292s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:03 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container csi-snapshotter Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:03 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container csi-snapshotter Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:03 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container azuredisk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container csi-resizer Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" in 2.20984546s (2.209857861s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container azuredisk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image 
"mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container liveness-probe Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container csi-resizer Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:05 +0000 UTC - event for csi-azuredisk-controller-56db99df6c-9zdpw: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container liveness-probe Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:06 +0000 UTC - event for disk-csi-azure-com: {disk.csi.azure.com/1679006706011-8081-disk.csi.azure.com } LeaderElection: 1679006706011-8081-disk-csi-azure-com became leader Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:06 +0000 UTC - event for external-attacher-leader-disk-csi-azure-com: {external-attacher-leader-disk.csi.azure.com/capz-conf-0bueug-control-plane-mj5bc } LeaderElection: capz-conf-0bueug-control-plane-mj5bc became leader Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:06 +0000 UTC - event for external-resizer-disk-csi-azure-com: {external-resizer-disk-csi-azure-com/capz-conf-0bueug-control-plane-mj5bc } LeaderElection: capz-conf-0bueug-control-plane-mj5bc became leader Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:06 +0000 UTC - event for external-snapshotter-leader-disk-csi-azure-com: {external-snapshotter-leader-disk.csi.azure.com/capz-conf-0bueug-control-plane-mj5bc } LeaderElection: capz-conf-0bueug-control-plane-mj5bc became leader Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:07 +0000 UTC - event for containerd-logger: {daemonset-controller } SuccessfulCreate: Created pod: containerd-logger-dv27w Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:07 +0000 UTC - event for containerd-logger-dv27w: {default-scheduler } Scheduled: Successfully assigned kube-system/containerd-logger-dv27w to capz-conf-275z6 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:07 +0000 UTC - event for kube-proxy-windows: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-windows-x8pwv Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:07 +0000 UTC - event for kube-proxy-windows-x8pwv: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-windows-x8pwv to capz-conf-275z6 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:08 +0000 UTC - event for containerd-logger: {daemonset-controller } SuccessfulCreate: Created pod: containerd-logger-lsh6r Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:08 +0000 UTC - event for containerd-logger-lsh6r: {default-scheduler } Scheduled: Successfully assigned kube-system/containerd-logger-lsh6r to capz-conf-scwjd Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:08 +0000 UTC - event for kube-proxy-windows: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-windows-bgfqk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:08 +0000 UTC - event for kube-proxy-windows-bgfqk: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-windows-bgfqk to capz-conf-scwjd Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:22 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:22 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} 
Pulled: Container image "sigwindowstools/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5-calico-hostprocess" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:22 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} Created: Created container kube-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:23 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:23 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} Pulled: Container image "sigwindowstools/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5-calico-hostprocess" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:23 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} Created: Created container kube-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:23 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} Started: Started container kube-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:23 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} Killing: Stopping container kube-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:24 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} Started: Started container kube-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:25 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} Killing: Stopping container kube-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:26 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 3.0525164s (3.0525164s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:29 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 3.0834111s (6.3479906s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:32 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Created: Created container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:33 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Started: Started container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:34 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Killing: Stopping container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Started: Started container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Killing: Stopping container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Created: Created container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:38 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 326.9977ms (326.9977ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:40 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image 
"ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 444.5104ms (444.5104ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:40 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-bgfqk_kube-system(1b0f5228-df77-4180-b53a-20f0f3d5acb4) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:44 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-x8pwv_kube-system(434d370f-88b5-4ede-acf0-2fe2029b30d0) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:49 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 374.7954ms (374.7954ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:51 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 411.3733ms (411.3733ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:00 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 362.5522ms (362.5522ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:02 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 470.8619ms (471.347ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:11 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-lsh6r_kube-system(017a5a4a-d9d2-4bc3-8671-6ed7c34dd141) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-vrwwk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-vrwwk to capz-conf-275z6 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-fwgj7 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy-fwgj7: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-proxy-fwgj7 to capz-conf-275z6 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:13 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 460.8675ms (460.8675ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:23 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container containerd-logger in 
pod containerd-logger-dv27w_kube-system(8b158921-6e6f-4293-aa4d-f1ba3f8d6022) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Created: Created container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 14.3317146s (14.6425719s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Started: Started container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:28 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Killing: Stopping container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:32 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Created: Created container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 16.0298164s (30.8268854s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-fwgj7_kube-system(ec53bf42-2782-4e41-954c-24c0694b8136) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:44 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Started: Started container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:44 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Killing: Stopping container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:49 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-tf9rw Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-tf9rw to capz-conf-scwjd Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-dm54w Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-proxy-dm54w: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-proxy-dm54w to capz-conf-scwjd Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:02 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:02 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:17 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Created: Created container init Mar 16 
22:58:14.155: INFO: At 2023-03-16 22:48:17 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 14.7947712s (14.7947712s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:18 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Started: Started container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:18 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Killing: Stopping container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:19 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 14.7685822s (29.5448352s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Started: Started container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Created: Created container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:33 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Killing: Stopping container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:37 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:48 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-dm54w_kube-system(1dafe25d-5961-4f8a-8685-e52c2150ab68) Mar 16 22:58:14.216: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:58:14.216: INFO: containerd-logger-dv27w capz-conf-275z6 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:20 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:20 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:07 +0000 UTC }] Mar 16 22:58:14.216: INFO: containerd-logger-lsh6r capz-conf-scwjd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:03 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:03 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC }] Mar 16 22:58:14.216: INFO: coredns-5d78c9869d-jg2mq capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC }] Mar 
16 22:58:14.216: INFO: coredns-5d78c9869d-nbrqn capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-azuredisk-controller-56db99df6c-9zdpw capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-azuredisk-node-v7lzh capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-azuredisk-node-win-tf9rw capz-conf-scwjd Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:48:01 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:48:01 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:48:01 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:48:01 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-azuredisk-node-win-vrwwk capz-conf-275z6 Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:53:10 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:46:12 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:46:12 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:46:12 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-proxy-dm54w capz-conf-scwjd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:48:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:54:06 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:54:06 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:48:01 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-proxy-fwgj7 capz-conf-275z6 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:46:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:07 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:07 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:46:12 +0000 UTC }] Mar 16 22:58:14.216: INFO: etcd-capz-conf-0bueug-control-plane-mj5bc capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:46 +0000 UTC }] Mar 16 22:58:14.216: INFO: kube-apiserver-capz-conf-0bueug-control-plane-mj5bc capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:21 +0000 UTC }] Mar 16 22:58:14.216: INFO: kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:57 +0000 UTC }] Mar 16 22:58:14.216: INFO: kube-proxy-windows-bgfqk capz-conf-scwjd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:56:04 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:56:04 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC }] Mar 16 22:58:14.216: INFO: kube-proxy-windows-x8pwv capz-conf-275z6 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:55:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:55:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:07 +0000 UTC }] Mar 16 22:58:14.216: INFO: kube-proxy-xbdgr capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:46 +0000 UTC }] Mar 16 22:58:14.216: INFO: kube-scheduler-capz-conf-0bueug-control-plane-mj5bc capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:42:21 +0000 UTC }] Mar 16 22:58:14.216: INFO: metrics-server-6987569d96-8kswn capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2023-03-16 22:44:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC }] Mar 16 22:58:14.216: INFO: Mar 16 22:58:14.736: INFO: Logging node info for node capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.930: INFO: Node Info: &Node{ObjectMeta:{capz-conf-0bueug-control-plane-mj5bc 5acfd927-a427-40bb-9f34-e39e413f4fea 4084 0 2023-03-16 22:42:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_B2s beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-0bueug-control-plane-mj5bc kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_B2s topology.disk.csi.azure.com/zone:eastus-2 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-0bueug cluster.x-k8s.io/cluster-namespace:capz-conf-0bueug cluster.x-k8s.io/machine:capz-conf-0bueug-control-plane-9rfgf cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-0bueug-control-plane csi.volume.kubernetes.io/nodeid:{"csi.tigera.io":"capz-conf-0bueug-control-plane-mj5bc","disk.csi.azure.com":"capz-conf-0bueug-control-plane-mj5bc"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.153.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-16 22:42:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-16 22:42:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2023-03-16 22:42:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-16 22:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {calico-node Update v1 2023-03-16 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-16 22:55:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-0bueug/providers/Microsoft.Compute/virtualMachines/capz-conf-0bueug-control-plane-mj5bc,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4123176960 0} {<nil>} 4026540Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4018319360 0} {<nil>} 3924140Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-16 22:44:01 +0000 UTC,LastTransitionTime:2023-03-16 22:44:01 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-16 22:55:31 +0000 UTC,LastTransitionTime:2023-03-16 22:42:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-16 22:55:31 +0000 UTC,LastTransitionTime:2023-03-16 22:42:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-16 22:55:31 +0000 UTC,LastTransitionTime:2023-03-16 22:42:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-16 22:55:31 +0000 UTC,LastTransitionTime:2023-03-16 22:43:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-0bueug-control-plane-mj5bc,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:98eacf6cdf2f4271815d96a2ce13a4d9,SystemUUID:eaf33ec8-b07f-4744-857a-42608e4dfa4a,BootID:ac5d5320-b111-4e85-adc1-ee6a6ff55212,KernelVersion:5.4.0-1104-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-alpha.3.830+9fce3cd4b80206,KubeProxyVersion:v1.27.0-alpha.3.830+9fce3cd4b80206,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 docker.io/calico/cni:v3.25.0],SizeBytes:87984941,},ContainerImage{Names:[docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 docker.io/calico/node:v3.25.0],SizeBytes:87185935,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:3ef7d954946bd1cf9e5e3564a8d1acf8e5852616f7ae96bcbc5ced8c275483ee mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0],SizeBytes:61391360,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:9ba6483d2f8aa6051cb3a50e42d638fc17a6e4699a6689f054969024b7c12944 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0],SizeBytes:58560473,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:bc317fea7e7bbaff65130d7ac6ea7c96bc15eb1f086374b8c3359f11988ac024 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0],SizeBytes:57948644,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:56451605,},ContainerImage{Names:[docker.io/calico/apiserver@sha256:9819c1b569e60eec4dbab82c1b41cee80fe8af282b25ba2c174b2a00ae555af6 docker.io/calico/apiserver:v3.25.0],SizeBytes:35624155,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0 registry.k8s.io/kube-apiserver:v1.26.2],SizeBytes:35329425,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver@sha256:9f4dbd6080f3fe1b6ac39c344e4709225ff9e6acdf0d9d04b56febd2dea2cbe9 gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.3.828_a34e37c9963af5],SizeBytes:33207963,},ContainerImage{Names:[capzci.azurecr.io/kube-apiserver@sha256:7ab9a4d89e95b1eb4f71e631181baa2629cf87a80dd0be83336eb8b5d7630b9c capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206],SizeBytes:33206596,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad registry.k8s.io/kube-controller-manager:v1.26.2],SizeBytes:32180749,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3 
docker.io/calico/kube-controllers:v3.25.0],SizeBytes:31271800,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager@sha256:a947f615693df3fedee79f387a0662021d8daa9a5af7aeae5fea7de5465c5a9b gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.3.828_a34e37c9963af5],SizeBytes:30822732,},ContainerImage{Names:[capzci.azurecr.io/kube-controller-manager@sha256:7d231a5d3d9e9cd210ab462993a848ce43e134be46926ccbb66592efbd56d10e capzci.azurecr.io/kube-controller-manager:v1.27.0-alpha.3.830_9fce3cd4b80206],SizeBytes:30821169,},ContainerImage{Names:[docker.io/calico/typha@sha256:f7e0557e03f422c8ba5fcf64ef0fac054ee99935b5d101a0a50b5e9b65f6a5c5 docker.io/calico/typha:v3.25.0],SizeBytes:28533187,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2 k8s.gcr.io/metrics-server/metrics-server:v0.6.2],SizeBytes:28135299,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy@sha256:e94e3fd5a946063b77a46ff46b36b49f3a2684477ccc424d7ae21510c6e06e41 gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5],SizeBytes:23897922,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:beb5ef042e3f6f0090582048bda6e0ef66aa20dc6e672183965173c1b9ce242e capzci.azurecr.io/kube-proxy:v1.27.0-alpha.3.830_9fce3cd4b80206],SizeBytes:23896355,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078 registry.k8s.io/kube-proxy:v1.26.2],SizeBytes:21541935,},ContainerImage{Names:[quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582 quay.io/tigera/operator:v1.29.0],SizeBytes:21108896,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler@sha256:f81a7333ad42431b18ad78d6b76dd43ecf3d1f0eb4fc8a62c619a6093d9c7e71 gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.3.828_a34e37c9963af5],SizeBytes:18079486,},ContainerImage{Names:[capzci.azurecr.io/kube-scheduler@sha256:6f976158ba6313156a151f224fc1b3f4d8682bbdc33c66627d840bde6d13041c capzci.azurecr.io/kube-scheduler:v1.27.0-alpha.3.830_9fce3cd4b80206],SizeBytes:18078145,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417 registry.k8s.io/kube-scheduler:v1.26.2],SizeBytes:17489559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/calico/node-driver-registrar@sha256:f559ee53078266d2126732303f588b9d4266607088e457ea04286f31727676f7 docker.io/calico/node-driver-registrar:v3.25.0],SizeBytes:11133658,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 
mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:10076715,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:9117963,},ContainerImage{Names:[docker.io/calico/csi@sha256:61a95f3ee79a7e591aff9eff535be73e62d2c3931d07c2ea8a1305f7bea19b31 docker.io/calico/csi:v3.25.0],SizeBytes:9076936,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:01ddd57d428787b3ac689daa685660defe4bd7810069544bd43a9103a7b0a789 docker.io/calico/pod2daemon-flexvol:v3.25.0],SizeBytes:7076045,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 16 22:58:14.932: INFO: Logging kubelet events for node capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:15.125: INFO: Logging pods the kubelet thinks is on node capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:15.396: INFO: calico-typha-7998d677cf-226xr started at 2023-03-16 22:43:26 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container calico-typha ready: true, restart count 0 Mar 16 22:58:15.396: INFO: coredns-5d78c9869d-nbrqn started at 2023-03-16 22:43:51 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container coredns ready: true, restart count 0 Mar 16 22:58:15.396: INFO: calico-apiserver-d5667676d-p688x started at 2023-03-16 22:44:15 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container calico-apiserver ready: true, restart count 0 Mar 16 22:58:15.396: INFO: calico-apiserver-d5667676d-n4fs8 started at 2023-03-16 22:44:15 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container calico-apiserver ready: true, restart count 0 Mar 16 22:58:15.396: INFO: csi-azuredisk-node-v7lzh started at 2023-03-16 22:44:40 +0000 UTC (0+3 container statuses recorded) Mar 16 22:58:15.396: INFO: Container azuredisk ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container liveness-probe ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 16 22:58:15.396: INFO: kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc started at 2023-03-16 22:42:57 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 16 22:58:15.396: INFO: tigera-operator-59c686f986-rt8kc started at 2023-03-16 22:43:18 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container tigera-operator ready: true, restart count 0 Mar 16 22:58:15.396: INFO: kube-proxy-xbdgr started at 2023-03-16 22:42:46 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 22:58:15.396: INFO: calico-node-h559n started at 2023-03-16 22:43:26 +0000 UTC (2+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Init container flexvol-driver ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Init container install-cni ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container calico-node ready: true, restart count 0 Mar 16 22:58:15.396: INFO: kube-scheduler-capz-conf-0bueug-control-plane-mj5bc started at 2023-03-16 22:42:21 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container 
kube-scheduler ready: true, restart count 0 Mar 16 22:58:15.396: INFO: etcd-capz-conf-0bueug-control-plane-mj5bc started at 2023-03-16 22:42:46 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container etcd ready: true, restart count 0 Mar 16 22:58:15.396: INFO: csi-node-driver-svgcw started at 2023-03-16 22:43:52 +0000 UTC (0+2 container statuses recorded) Mar 16 22:58:15.396: INFO: Container calico-csi ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container csi-node-driver-registrar ready: true, restart count 0 Mar 16 22:58:15.396: INFO: csi-azuredisk-controller-56db99df6c-9zdpw started at 2023-03-16 22:44:40 +0000 UTC (0+6 container statuses recorded) Mar 16 22:58:15.396: INFO: Container azuredisk ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container csi-attacher ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container csi-provisioner ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container csi-resizer ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 16 22:58:15.396: INFO: Container liveness-probe ready: true, restart count 0 Mar 16 22:58:15.396: INFO: calico-kube-controllers-59d9cb8fbb-5jzmf started at 2023-03-16 22:43:51 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container calico-kube-controllers ready: true, restart count 0 Mar 16 22:58:15.396: INFO: coredns-5d78c9869d-jg2mq started at 2023-03-16 22:43:51 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container coredns ready: true, restart count 0 Mar 16 22:58:15.396: INFO: kube-apiserver-capz-conf-0bueug-control-plane-mj5bc started at 2023-03-16 22:42:21 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container kube-apiserver ready: true, restart count 0 Mar 16 22:58:15.396: INFO: metrics-server-6987569d96-8kswn started at 2023-03-16 22:43:51 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:15.396: INFO: Container metrics-server ready: true, restart count 0 Mar 16 22:58:15.999: INFO: Latency metrics for node capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:15.999: INFO: Logging node info for node capz-conf-275z6 Mar 16 22:58:16.124: INFO: Node Info: &Node{ObjectMeta:{capz-conf-275z6 6173e158-308c-4cdd-af0b-b8f373959f1e 4521 0 2023-03-16 22:45:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-275z6 kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-0bueug cluster.x-k8s.io/cluster-namespace:capz-conf-0bueug cluster.x-k8s.io/machine:capz-conf-0bueug-md-win-786c6dcc6f-j5vpk cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-0bueug-md-win-786c6dcc6f kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-16 22:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-03-16 22:45:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2023-03-16 22:45:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-16 22:57:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet.exe Update v1 2023-03-16 22:57:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-0bueug/providers/Microsoft.Compute/virtualMachines/capz-conf-275z6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31686914048 0} {<nil>} 30944252Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{28518222596 0} {<nil>} 28518222596 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-16 22:57:13 +0000 UTC,LastTransitionTime:2023-03-16 22:45:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-16 22:57:13 +0000 UTC,LastTransitionTime:2023-03-16 22:45:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-16 22:57:13 +0000 UTC,LastTransitionTime:2023-03-16 22:45:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-16 22:57:13 +0000 UTC,LastTransitionTime:2023-03-16 22:57:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-275z6,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-275z6,SystemUUID:3D93AFA0-5F3F-40C2-9EEF-6D425C24C807,BootID:9,KernelVersion:10.0.17763.4010,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.7.0,KubeletVersion:v1.27.0-alpha.3.830+9fce3cd4b80206-dirty,KubeProxyVersion:v1.27.0-alpha.3.830+9fce3cd4b80206-dirty,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:130192348,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:014b93f2aae432969c3ffe0f99d4c30537e101572f1007e9a15ace393df47e7b docker.io/sigwindowstools/calico-install:v3.25.0-hostprocess],SizeBytes:49946025,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 16 22:58:16.125: INFO: Logging kubelet events for node capz-conf-275z6 Mar 16 22:58:16.325: INFO: Logging pods the kubelet thinks is on node capz-conf-275z6 Mar 16 22:58:16.537: INFO: kube-proxy-windows-x8pwv started at 2023-03-16 22:45:08 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:16.537: INFO: Container kube-proxy ready: false, restart count 9 Mar 16 22:58:16.537: INFO: calico-node-windows-ptp8l started at 2023-03-16 22:45:08 +0000 UTC (1+2 container statuses recorded) Mar 16 22:58:16.537: INFO: Init container install-cni ready: false, restart count 39 Mar 16 22:58:16.537: INFO: Container calico-node-felix ready: false, restart count 0 Mar 16 22:58:16.537: INFO: Container calico-node-startup ready: false, restart count 0 Mar 16 22:58:16.537: INFO: containerd-logger-dv27w started at 2023-03-16 22:45:08 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:16.537: INFO: Container containerd-logger ready: false, restart count 10 Mar 16 22:58:16.537: INFO: csi-proxy-fwgj7 started at 2023-03-16 22:46:12 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:16.537: INFO: Container csi-proxy ready: false, restart count 9 Mar 16 22:58:16.537: INFO: csi-azuredisk-node-win-vrwwk started at 2023-03-16 22:46:12 +0000 UTC (1+3 container statuses recorded) Mar 16 22:58:16.537: INFO: Init container init ready: false, restart count 75 Mar 16 22:58:16.537: INFO: Container azuredisk ready: false, restart count 0 Mar 16 22:58:16.537: INFO: Container liveness-probe ready: false, restart count 0 Mar 16 22:58:16.537: INFO: Container node-driver-registrar ready: false, restart count 0 Mar 16 22:58:17.170: INFO: Latency metrics for node capz-conf-275z6 Mar 16 22:58:17.170: INFO: Logging node info for node capz-conf-scwjd Mar 16 22:58:17.326: INFO: Node Info: &Node{ObjectMeta:{capz-conf-scwjd 3f4faabb-6032-4c4a-90ac-62ff8b27ee4f 3677 0 2023-03-16 22:45:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-scwjd kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-0bueug cluster.x-k8s.io/cluster-namespace:capz-conf-0bueug cluster.x-k8s.io/machine:capz-conf-0bueug-md-win-786c6dcc6f-d9khz cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-0bueug-md-win-786c6dcc6f kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-03-16 22:45:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-16 22:45:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {manager Update v1 2023-03-16 22:46:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-16 22:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet.exe Update v1 2023-03-16 22:53:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-0bueug/providers/Microsoft.Compute/virtualMachines/capz-conf-scwjd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31686914048 0} {<nil>} 30944252Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{28518222596 0} {<nil>} 28518222596 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-16 22:53:49 +0000 UTC,LastTransitionTime:2023-03-16 22:45:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-16 22:53:49 +0000 UTC,LastTransitionTime:2023-03-16 22:45:08 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-16 22:53:49 +0000 UTC,LastTransitionTime:2023-03-16 22:45:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-16 22:53:49 +0000 UTC,LastTransitionTime:2023-03-16 22:48:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-scwjd,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-scwjd,SystemUUID:2933BC02-87B2-47BA-996D-3DE3B6CBD02D,BootID:9,KernelVersion:10.0.17763.4010,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.7.0,KubeletVersion:v1.27.0-alpha.3.830+9fce3cd4b80206-dirty,KubeProxyVersion:v1.27.0-alpha.3.830+9fce3cd4b80206-dirty,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:130192348,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:014b93f2aae432969c3ffe0f99d4c30537e101572f1007e9a15ace393df47e7b docker.io/sigwindowstools/calico-install:v3.25.0-hostprocess],SizeBytes:49946025,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 16 22:58:17.327: INFO: Logging kubelet events for node capz-conf-scwjd Mar 16 22:58:17.526: INFO: Logging pods the kubelet thinks is on node capz-conf-scwjd Mar 16 22:58:17.737: INFO: calico-node-windows-64sf9 started at 2023-03-16 22:45:09 +0000 UTC (1+2 container statuses recorded) Mar 16 22:58:17.737: INFO: Init container install-cni ready: false, restart count 139 Mar 16 22:58:17.737: INFO: Container calico-node-felix ready: false, restart count 0 Mar 16 22:58:17.737: INFO: Container calico-node-startup ready: false, restart count 0 Mar 16 22:58:17.737: INFO: containerd-logger-lsh6r started at 2023-03-16 22:45:09 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:17.737: INFO: Container containerd-logger ready: false, restart count 9 Mar 16 22:58:17.737: INFO: kube-proxy-windows-bgfqk started at 2023-03-16 22:45:09 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:17.737: INFO: Container kube-proxy ready: false, restart count 9 Mar 16 22:58:17.737: INFO: csi-proxy-dm54w started at 2023-03-16 22:48:01 +0000 UTC (0+1 container statuses recorded) Mar 16 22:58:17.737: INFO: Container csi-proxy 
ready: false, restart count 7 Mar 16 22:58:17.737: INFO: csi-azuredisk-node-win-tf9rw started at 2023-03-16 22:48:01 +0000 UTC (1+3 container statuses recorded) Mar 16 22:58:17.737: INFO: Init container init ready: false, restart count 1 Mar 16 22:58:17.737: INFO: Container azuredisk ready: false, restart count 0 Mar 16 22:58:17.737: INFO: Container liveness-probe ready: false, restart count 0 Mar 16 22:58:17.737: INFO: Container node-driver-registrar ready: false, restart count 0 Mar 16 22:58:18.364: INFO: Latency metrics for node capz-conf-scwjd Mar 16 22:58:18.542: INFO: Running kubectl logs on non-ready containers in kube-system Mar 16 22:58:18.730: INFO: Logs of kube-system/containerd-logger-dv27w:containerd-logger on node capz-conf-275z6 Mar 16 22:58:18.730: INFO: : STARTLOG Using configuration file config.json: { "inputs": [ { "type": "ETW", "sessionNamePrefix": "containerd", "cleanupOldSessions": true, "reuseExistingSession": true, "providers": [ { "providerName": "Microsoft.Virtualization.RunHCS", "providerGuid": "0B52781F-B24D-5685-DDF6-69830ED40EC3", "level": "Verbose" }, { "providerName": "ContainerD", "providerGuid": "2acb92c0-eb9b-571a-69cf-8f3410f383ad", "level": "Verbose" } ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ], "schemaVersion": "2016-08-11" } Logging started... 
ENDLOG for container kube-system:containerd-logger-dv27w:containerd-logger Mar 16 22:58:18.930: INFO: Logs of kube-system/containerd-logger-lsh6r:containerd-logger on node capz-conf-scwjd Mar 16 22:58:18.930: INFO: : STARTLOG Using configuration file config.json: { "inputs": [ { "type": "ETW", "sessionNamePrefix": "containerd", "cleanupOldSessions": true, "reuseExistingSession": true, "providers": [ { "providerName": "Microsoft.Virtualization.RunHCS", "providerGuid": "0B52781F-B24D-5685-DDF6-69830ED40EC3", "level": "Verbose" }, { "providerName": "ContainerD", "providerGuid": "2acb92c0-eb9b-571a-69cf-8f3410f383ad", "level": "Verbose" } ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ], "schemaVersion": "2016-08-11" } Logging started... ENDLOG for container kube-system:containerd-logger-lsh6r:containerd-logger Mar 16 22:58:19.327: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw) Mar 16 22:58:19.327: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:liveness-probe on node capz-conf-scwjd Mar 16 22:58:19.327: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:liveness-probe Mar 16 22:58:19.727: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw) Mar 16 22:58:19.727: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:node-driver-registrar on node capz-conf-scwjd Mar 16 22:58:19.727: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:node-driver-registrar Mar 16 22:58:20.127: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw) Mar 16 22:58:20.127: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:azuredisk on node capz-conf-scwjd Mar 16 22:58:20.127: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:azuredisk Mar 16 22:58:20.527: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk) Mar 16 22:58:20.527: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:liveness-probe on node capz-conf-275z6 Mar 16 22:58:20.527: INFO: : STARTLOG ENDLOG for container 
kube-system:csi-azuredisk-node-win-vrwwk:liveness-probe Mar 16 22:58:20.926: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk) Mar 16 22:58:20.926: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:node-driver-registrar on node capz-conf-275z6 Mar 16 22:58:20.926: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:node-driver-registrar Mar 16 22:58:21.327: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk) Mar 16 22:58:21.327: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:azuredisk on node capz-conf-275z6 Mar 16 22:58:21.327: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:azuredisk Mar 16 22:58:21.537: INFO: Logs of kube-system/csi-proxy-dm54w:csi-proxy on node capz-conf-scwjd Mar 16 22:58:21.537: INFO: : STARTLOG I0316 22:54:00.735419 1336 main.go:54] Starting CSI-Proxy Server ... I0316 22:54:00.773922 1336 main.go:55] Version: v1.0.2-0-g51a6f06 ENDLOG for container kube-system:csi-proxy-dm54w:csi-proxy Mar 16 22:58:21.741: INFO: Logs of kube-system/csi-proxy-fwgj7:csi-proxy on node capz-conf-275z6 Mar 16 22:58:21.741: INFO: : STARTLOG I0316 22:57:02.346573 3708 main.go:54] Starting CSI-Proxy Server ... I0316 22:57:02.398581 3708 main.go:55] Version: v1.0.2-0-g51a6f06 ENDLOG for container kube-system:csi-proxy-fwgj7:csi-proxy Mar 16 22:58:21.928: INFO: Logs of kube-system/kube-proxy-windows-bgfqk:kube-proxy on node capz-conf-scwjd Mar 16 22:58:21.928: INFO: : STARTLOG ENDLOG for container kube-system:kube-proxy-windows-bgfqk:kube-proxy Mar 16 22:58:22.128: INFO: Logs of kube-system/kube-proxy-windows-x8pwv:kube-proxy on node capz-conf-275z6 Mar 16 22:58:22.128: INFO: : STARTLOG ENDLOG for container kube-system:kube-proxy-windows-x8pwv:kube-proxy
[FAILED] in [SynchronizedBeforeSuite] - test/e2e/e2e.go:242 @ 03/16/23 22:58:22.129
<< Timeline
[FAILED] Error waiting for all pods to be running and ready: Timed out after 600.001s. Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0). 10 / 18 pods were running and ready. Expected 4 pod replicas, 4 are Running and Ready. 
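The eight kube-system pods that never became Ready (dumped below) are all Windows DaemonSet pods on capz-conf-275z6 and capz-conf-scwjd: containerd-logger, kube-proxy-windows, csi-proxy (crash-looping), and csi-azuredisk-node-win (still Pending in its init container). The containerd-logger containers, for example, keep exiting with code -1073741510, which is the signed 32-bit form of NTSTATUS 0xC000013A (STATUS_CONTROL_C_EXIT); the host-process containers appear to be terminated by a console-control event rather than failing on their own, after which the kubelet reports CrashLoopBackOff. The gate that timed out above is essentially a poll of the Ready condition on every kube-system pod; the sketch below shows that kind of check with client-go. It is illustrative only, not the e2e framework's own helper, and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at the workload cluster's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			// A pod counts as ready only when its PodReady condition is True.
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready++
			}
		}
	}
	fmt.Printf("%d / %d kube-system pods are Ready\n", ready, len(pods.Items))
}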
Pods that were neither completed nor running: <[]v1.Pod | len:8, cap:8>: - metadata: creationTimestamp: "2023-03-16T22:45:07Z" generateName: containerd-logger- labels: controller-revision-hash: 56b7f4bb6 k8s-app: containerd-logger pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"12eff340-8317-4323-9a1d-fda6027ced9b"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"containerd-logger"}: .: {} f:args: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/config.json"}: .: {} f:mountPath: {} f:name: {} f:subPath: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"containerd-logger-config"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:45:07Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:57:20Z" name: containerd-logger-dv27w namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: containerd-logger uid: 12eff340-8317-4323-9a1d-fda6027ced9b resourceVersion: "4545" uid: 8b158921-6e6f-4293-aa4d-f1ba3f8d6022 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-275z6 containers: - args: - config.json image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imagePullPolicy: Always name: containerd-logger resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config.json name: containerd-logger-config subPath: config.json - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-x6629 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-275z6 nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule 
key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: containerd-logger-config name: containerd-logger-config - name: kube-api-access-x6629 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:08Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:57:20Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:57:20Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:07Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://a13b3dcc37d95a1f2869569555c1ea190f9f67ae6660b3e78f036574afeeaabb image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 lastState: terminated: containerID: containerd://a13b3dcc37d95a1f2869569555c1ea190f9f67ae6660b3e78f036574afeeaabb exitCode: -1073741510 finishedAt: "2023-03-16T22:57:15Z" reason: Error startedAt: "2023-03-16T22:57:14Z" name: containerd-logger ready: false restartCount: 10 started: false state: waiting: message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-dv27w_kube-system(8b158921-6e6f-4293-aa4d-f1ba3f8d6022) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-16T22:45:08Z" - metadata: creationTimestamp: "2023-03-16T22:45:08Z" generateName: containerd-logger- labels: controller-revision-hash: 56b7f4bb6 k8s-app: containerd-logger pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"12eff340-8317-4323-9a1d-fda6027ced9b"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"containerd-logger"}: .: {} f:args: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/config.json"}: .: {} f:mountPath: {} f:name: {} f:subPath: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"containerd-logger-config"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: 
"2023-03-16T22:45:08Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:57:03Z" name: containerd-logger-lsh6r namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: containerd-logger uid: 12eff340-8317-4323-9a1d-fda6027ced9b resourceVersion: "4458" uid: 017a5a4a-d9d2-4bc3-8671-6ed7c34dd141 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-scwjd containers: - args: - config.json image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imagePullPolicy: Always name: containerd-logger resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config.json name: containerd-logger-config subPath: config.json - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9jj9w readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-scwjd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: containerd-logger-config name: containerd-logger-config - name: kube-api-access-9jj9w projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:09Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:57:03Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:57:03Z" message: 'containers with unready status: [containerd-logger]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:08Z" status: "True" type: 
PodScheduled containerStatuses: - containerID: containerd://5466d33baabec2eac4ebb86646586f775059d171a0498b0b8ab965a8c10f0639 image: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0 imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 lastState: terminated: containerID: containerd://5466d33baabec2eac4ebb86646586f775059d171a0498b0b8ab965a8c10f0639 exitCode: -1073741510 finishedAt: "2023-03-16T22:56:57Z" reason: Error startedAt: "2023-03-16T22:56:57Z" name: containerd-logger ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-lsh6r_kube-system(017a5a4a-d9d2-4bc3-8671-6ed7c34dd141) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-16T22:45:09Z" - metadata: creationTimestamp: "2023-03-16T22:48:01Z" generateName: csi-azuredisk-node-win- labels: app: csi-azuredisk-node-win app.kubernetes.io/instance: azuredisk-csi-driver-oot app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: azuredisk-csi-driver app.kubernetes.io/version: v1.27.0 controller-revision-hash: d9d49cd64 helm.sh/chart: azuredisk-csi-driver-v1.27.0 pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:app: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/version: {} f:controller-revision-hash: {} f:helm.sh/chart: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"131c24de-1998-4717-a91f-9d46e1a45c37"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"azuredisk"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"AZURE_CREDENTIAL_FILE"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"AZURE_GO_SDK_LOG_LEVEL"}: .: {} f:name: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":29603,"protocol":"TCP"}: .: {} f:containerPort: {} f:hostPort: {} f:name: {} f:protocol: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"liveness-probe"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"node-driver-registrar"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"DRIVER_REG_SOCK_PATH"}: .: {} f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"PLUGIN_REG_DIR"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: 
{} f:hostNetwork: {} f:initContainers: .: {} k:{"name":"init"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:nodeSelector: {} f:priorityClassName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:48:01Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:initContainerStatuses: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:58:11Z" name: csi-azuredisk-node-win-tf9rw namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-azuredisk-node-win uid: 131c24de-1998-4717-a91f-9d46e1a45c37 resourceVersion: "4742" uid: ed265ba8-a975-4791-b063-89971be5c679 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-scwjd containers: - args: - --csi-address=$(CSI_ENDPOINT) - --probe-timeout=3s - --health-port=29603 - --v=2 command: - livenessprobe.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imagePullPolicy: IfNotPresent name: liveness-probe resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8hr65 readOnly: true - args: - --v=2 - --csi-address=$(CSI_ENDPOINT) - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --plugin-registration-path=$(PLUGIN_REG_DIR) command: - csi-node-driver-registrar.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: DRIVER_REG_SOCK_PATH value: C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: PLUGIN_REG_DIR value: C:\\var\\lib\\kubelet\\plugins_registry\\ - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - csi-node-driver-registrar.exe - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --mode=kubelet-registration-probe failureThreshold: 3 initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 30 name: node-driver-registrar resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8hr65 readOnly: true - args: - --v=5 - --endpoint=$(CSI_ENDPOINT) - --nodeid=$(KUBE_NODE_NAME) - --metrics-address=0.0.0.0:29605 - --drivername=disk.csi.azure.com - 
--volume-attach-limit=-1 - --cloud-config-secret-name=azure-cloud-provider - --cloud-config-secret-namespace=kube-system - --custom-user-agent= - --user-agent-suffix=OSS-helm - --allow-empty-cloud-config=true - --support-zone=true command: - azurediskplugin.exe env: - name: AZURE_CREDENTIAL_FILE valueFrom: configMapKeyRef: key: path-windows name: azure-cred-file optional: true - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: AZURE_GO_SDK_LOG_LEVEL image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: healthz scheme: HTTP initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: azuredisk ports: - containerPort: 29603 hostPort: 29603 name: healthz protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8hr65 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true initContainers: - command: - powershell.exe - -c - New-Item - -ItemType - Directory - -Path - C:\var\lib\kubelet\plugins\disk.csi.azure.com\ - -Force image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent name: init resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8hr65 readOnly: true nodeName: capz-conf-scwjd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: csi-azuredisk-node-sa serviceAccountName: csi-azuredisk-node-sa terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node.kubernetes.io/os operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-8hr65 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:48:01Z" message: 'containers with incomplete status: [init]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:48:01Z" message: 'containers with unready status: [liveness-probe node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:48:01Z" message: 'containers with unready status: [liveness-probe 
node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:48:01Z" status: "True" type: PodScheduled containerStatuses: - image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: "" lastState: {} name: azuredisk ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imageID: "" lastState: {} name: liveness-probe ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imageID: "" lastState: {} name: node-driver-registrar ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.1.0.5 initContainerStatuses: - containerID: containerd://443057a1a90054d7fb752f186a05802c5ad902a500d09bdead52353da9cad0cd image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca lastState: {} name: init ready: false restartCount: 0 state: running: startedAt: "2023-03-16T22:58:10Z" phase: Pending podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-16T22:48:01Z" - metadata: creationTimestamp: "2023-03-16T22:46:12Z" generateName: csi-azuredisk-node-win- labels: app: csi-azuredisk-node-win app.kubernetes.io/instance: azuredisk-csi-driver-oot app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: azuredisk-csi-driver app.kubernetes.io/version: v1.27.0 controller-revision-hash: d9d49cd64 helm.sh/chart: azuredisk-csi-driver-v1.27.0 pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:app: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/version: {} f:controller-revision-hash: {} f:helm.sh/chart: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"131c24de-1998-4717-a91f-9d46e1a45c37"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"azuredisk"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"AZURE_CREDENTIAL_FILE"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"AZURE_GO_SDK_LOG_LEVEL"}: .: {} f:name: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":29603,"protocol":"TCP"}: .: {} f:containerPort: {} f:hostPort: {} f:name: {} f:protocol: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"liveness-probe"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} k:{"name":"node-driver-registrar"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"CSI_ENDPOINT"}: .: {} f:name: {} f:value: {} k:{"name":"DRIVER_REG_SOCK_PATH"}: .: {} f:name: {} f:value: {} 
k:{"name":"KUBE_NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"PLUGIN_REG_DIR"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:initContainers: .: {} k:{"name":"init"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:nodeSelector: {} f:priorityClassName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:46:12Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:initContainerStatuses: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:58:10Z" name: csi-azuredisk-node-win-vrwwk namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-azuredisk-node-win uid: 131c24de-1998-4717-a91f-9d46e1a45c37 resourceVersion: "4741" uid: 4de6fac1-17db-4fef-9dca-c9014a55b211 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-275z6 containers: - args: - --csi-address=$(CSI_ENDPOINT) - --probe-timeout=3s - --health-port=29603 - --v=2 command: - livenessprobe.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imagePullPolicy: IfNotPresent name: liveness-probe resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rr8wx readOnly: true - args: - --v=2 - --csi-address=$(CSI_ENDPOINT) - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --plugin-registration-path=$(PLUGIN_REG_DIR) command: - csi-node-driver-registrar.exe env: - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: DRIVER_REG_SOCK_PATH value: C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: PLUGIN_REG_DIR value: C:\\var\\lib\\kubelet\\plugins_registry\\ - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - csi-node-driver-registrar.exe - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) - --mode=kubelet-registration-probe failureThreshold: 3 
initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 30 name: node-driver-registrar resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rr8wx readOnly: true - args: - --v=5 - --endpoint=$(CSI_ENDPOINT) - --nodeid=$(KUBE_NODE_NAME) - --metrics-address=0.0.0.0:29605 - --drivername=disk.csi.azure.com - --volume-attach-limit=-1 - --cloud-config-secret-name=azure-cloud-provider - --cloud-config-secret-namespace=kube-system - --custom-user-agent= - --user-agent-suffix=OSS-helm - --allow-empty-cloud-config=true - --support-zone=true command: - azurediskplugin.exe env: - name: AZURE_CREDENTIAL_FILE valueFrom: configMapKeyRef: key: path-windows name: azure-cred-file optional: true - name: CSI_ENDPOINT value: unix://C:\\var\\lib\\kubelet\\plugins\\disk.csi.azure.com\\csi.sock - name: KUBE_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: AZURE_GO_SDK_LOG_LEVEL image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: healthz scheme: HTTP initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 10 name: azuredisk ports: - containerPort: 29603 hostPort: 29603 name: healthz protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rr8wx readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true initContainers: - command: - powershell.exe - -c - New-Item - -ItemType - Directory - -Path - C:\var\lib\kubelet\plugins\disk.csi.azure.com\ - -Force image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imagePullPolicy: IfNotPresent name: init resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rr8wx readOnly: true nodeName: capz-conf-275z6 nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: csi-azuredisk-node-sa serviceAccountName: csi-azuredisk-node-sa terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node.kubernetes.io/os operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-rr8wx projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:53:10Z" 
message: 'containers with incomplete status: [init]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:46:12Z" message: 'containers with unready status: [liveness-probe node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:46:12Z" message: 'containers with unready status: [liveness-probe node-driver-registrar azuredisk]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:46:12Z" status: "True" type: PodScheduled containerStatuses: - image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: "" lastState: {} name: azuredisk ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0 imageID: "" lastState: {} name: liveness-probe ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2 imageID: "" lastState: {} name: node-driver-registrar ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.1.0.4 initContainerStatuses: - containerID: containerd://6d89c8889c3f27dc8cba9e26fecf45ab6f9f0c0fd6b369652e904e368da3b12e image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0 imageID: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca lastState: {} name: init ready: false restartCount: 74 state: running: startedAt: "2023-03-16T22:58:10Z" phase: Pending podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-16T22:46:12Z" - metadata: creationTimestamp: "2023-03-16T22:48:01Z" generateName: csi-proxy- labels: controller-revision-hash: 69f9986785 k8s-app: csi-proxy pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"bd8e1d49-132f-4628-8635-2be07d8cb21b"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"csi-proxy"}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:48:01Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:54:06Z" name: csi-proxy-dm54w namespace: kube-system 
ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-proxy uid: bd8e1d49-132f-4628-8635-2be07d8cb21b resourceVersion: "3747" uid: 1dafe25d-5961-4f8a-8685-e52c2150ab68 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-scwjd containers: - image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imagePullPolicy: IfNotPresent name: csi-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-kkvcf readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-scwjd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-kkvcf projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:48:01Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:54:06Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:54:06Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:48:01Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://5eca80256499b374f486996578e81b3aff277eb497e5452059f2b0a6b584e98f image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba lastState: terminated: containerID: containerd://5eca80256499b374f486996578e81b3aff277eb497e5452059f2b0a6b584e98f exitCode: -1073741510 finishedAt: "2023-03-16T22:54:01Z" reason: Error startedAt: "2023-03-16T22:54:00Z" name: csi-proxy ready: false restartCount: 7 started: false state: waiting: message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-dm54w_kube-system(1dafe25d-5961-4f8a-8685-e52c2150ab68) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-16T22:48:01Z" - metadata: creationTimestamp: "2023-03-16T22:46:12Z" generateName: csi-proxy- labels: controller-revision-hash: 69f9986785 
k8s-app: csi-proxy pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"bd8e1d49-132f-4628-8635-2be07d8cb21b"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"csi-proxy"}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:46:12Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:57:07Z" name: csi-proxy-fwgj7 namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: csi-proxy uid: bd8e1d49-132f-4628-8635-2be07d8cb21b resourceVersion: "4486" uid: ec53bf42-2782-4e41-954c-24c0694b8136 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-275z6 containers: - image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imagePullPolicy: IfNotPresent name: csi-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rgp8p readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-275z6 nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\SYSTEM serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: kube-api-access-rgp8p projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: 
metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:46:12Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:57:07Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:57:07Z" message: 'containers with unready status: [csi-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:46:12Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://b743aa1990fa5cacdeef2891f352986125e55081e9e1f09b4f53855b942578d2 image: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2 imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba lastState: terminated: containerID: containerd://b743aa1990fa5cacdeef2891f352986125e55081e9e1f09b4f53855b942578d2 exitCode: -1073741510 finishedAt: "2023-03-16T22:57:02Z" reason: Error startedAt: "2023-03-16T22:57:02Z" name: csi-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-fwgj7_kube-system(ec53bf42-2782-4e41-954c-24c0694b8136) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-16T22:46:12Z" - metadata: creationTimestamp: "2023-03-16T22:45:08Z" generateName: kube-proxy-windows- labels: controller-revision-hash: 7d95f445dc k8s-app: kube-proxy-windows pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"643ba401-8bf9-4cf3-96d3-0a7a87852a6d"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"kube-proxy"}: .: {} f:args: {} f:env: .: {} k:{"name":"KUBEPROXY_PATH"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"POD_IP"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/var/lib/kube-proxy"}: .: {} f:mountPath: {} f:name: {} f:workingDir: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"kube-proxy"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:45:08Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: 
{} f:podIPs: .: {} k:{"ip":"10.1.0.5"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:56:04Z" name: kube-proxy-windows-bgfqk namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: kube-proxy-windows uid: 643ba401-8bf9-4cf3-96d3-0a7a87852a6d resourceVersion: "4220" uid: 1b0f5228-df77-4180-b53a-20f0f3d5acb4 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-scwjd containers: - args: - $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/start.ps1 env: - name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: KUBEPROXY_PATH valueFrom: configMapKeyRef: key: KUBEPROXY_PATH name: windows-kubeproxy-ci optional: true image: sigwindowstools/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5-calico-hostprocess imagePullPolicy: IfNotPresent name: kube-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fvpdd readOnly: true workingDir: $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/ dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-scwjd nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: kube-proxy serviceAccountName: kube-proxy terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: kube-proxy name: kube-proxy - name: kube-api-access-fvpdd projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:09Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:56:04Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:56:04Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:08Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://be073a99f597d9da07bf80b7b793854fe444f5f6230fd708f5d008ae2e736908 image: 
docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed lastState: terminated: containerID: containerd://be073a99f597d9da07bf80b7b793854fe444f5f6230fd708f5d008ae2e736908 exitCode: -1073741510 finishedAt: "2023-03-16T22:55:58Z" reason: Error startedAt: "2023-03-16T22:55:58Z" name: kube-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-bgfqk_kube-system(1b0f5228-df77-4180-b53a-20f0f3d5acb4) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 qosClass: BestEffort startTime: "2023-03-16T22:45:09Z" - metadata: creationTimestamp: "2023-03-16T22:45:07Z" generateName: kube-proxy-windows- labels: controller-revision-hash: 7d95f445dc k8s-app: kube-proxy-windows pod-template-generation: "1" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:controller-revision-hash: {} f:k8s-app: {} f:pod-template-generation: {} f:ownerReferences: .: {} k:{"uid":"643ba401-8bf9-4cf3-96d3-0a7a87852a6d"}: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"kube-proxy"}: .: {} f:args: {} f:env: .: {} k:{"name":"KUBEPROXY_PATH"}: .: {} f:name: {} f:valueFrom: .: {} f:configMapKeyRef: {} k:{"name":"NODE_NAME"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"POD_IP"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/var/lib/kube-proxy"}: .: {} f:mountPath: {} f:name: {} f:workingDir: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:hostNetwork: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:windowsOptions: .: {} f:hostProcess: {} f:runAsUserName: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"kube-proxy"}: .: {} f:configMap: .: {} f:defaultMode: {} f:name: {} f:name: {} manager: kube-controller-manager operation: Update time: "2023-03-16T22:45:07Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.1.0.4"}: .: {} f:ip: {} f:startTime: {} manager: kubelet.exe operation: Update subresource: status time: "2023-03-16T22:55:58Z" name: kube-proxy-windows-x8pwv namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: kube-proxy-windows uid: 643ba401-8bf9-4cf3-96d3-0a7a87852a6d resourceVersion: "4197" uid: 434d370f-88b5-4ede-acf0-2fe2029b30d0 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - capz-conf-275z6 containers: - args: - $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/start.ps1 env: - name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 
fieldPath: spec.nodeName - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: KUBEPROXY_PATH valueFrom: configMapKeyRef: key: KUBEPROXY_PATH name: windows-kubeproxy-ci optional: true image: sigwindowstools/kube-proxy:v1.27.0-alpha.3.828_a34e37c9963af5-calico-hostprocess imagePullPolicy: IfNotPresent name: kube-proxy resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-m7nkr readOnly: true workingDir: $env:CONTAINER_SANDBOX_MOUNT_POINT/kube-proxy/ dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: capz-conf-275z6 nodeSelector: kubernetes.io/os: windows preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: windowsOptions: hostProcess: true runAsUserName: NT AUTHORITY\system serviceAccount: kube-proxy serviceAccountName: kube-proxy terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - configMap: defaultMode: 420 name: kube-proxy name: kube-proxy - name: kube-api-access-m7nkr projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:08Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2023-03-16T22:55:58Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-03-16T22:55:58Z" message: 'containers with unready status: [kube-proxy]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-03-16T22:45:07Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://195e6a8c7720308f7313bf0022da068f10c0d49a9d7d1a6411692b1d316f2c8d image: docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed lastState: terminated: containerID: containerd://195e6a8c7720308f7313bf0022da068f10c0d49a9d7d1a6411692b1d316f2c8d exitCode: -1073741510 finishedAt: "2023-03-16T22:55:52Z" reason: Error startedAt: "2023-03-16T22:55:52Z" name: kube-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-x8pwv_kube-system(434d370f-88b5-4ede-acf0-2fe2029b30d0) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-16T22:45:08Z"�[0m �[38;5;9mIn �[1m[SynchronizedBeforeSuite]�[0m�[38;5;9m at: 
test/e2e/e2e.go:242 @ 03/16/23 22:58:22.129
------------------------------
[SynchronizedBeforeSuite] [FAILED] [728.265 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.165
------------------------------
[SynchronizedBeforeSuite] [FAILED] [728.292 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.167
------------------------------
[SynchronizedBeforeSuite] [FAILED] [728.296 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.167
------------------------------
Summarizing 4 Failures:
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77
[FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:242
Ran 0 of 7207 Specs in 728.460 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.
I0316 22:46:13.399650 14 e2e.go:117] Starting e2e run "4af2e184-7c1d-4a05-ae29-fb6d39ca4fea" on Ginkgo node 1
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful.
You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (728.90s)
FAIL
I0316 22:46:13.397133 16 e2e.go:117] Starting e2e run "f4cfa78f-3b54-4e19-9629-095930e680bb" on Ginkgo node 2
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (728.79s)
FAIL
I0316 22:46:13.402781 17 e2e.go:117] Starting e2e run "e39c4442-71a1-4ace-8627-54aecbc25947" on Ginkgo node 3
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (728.79s)
FAIL
I0316 22:46:13.401674 19 e2e.go:117] Starting e2e run "c0b35fcf-0d00-4215-aaaf-3c73b83e8307" on Ginkgo node 4
You're using deprecated Ginkgo functionality:
=============================================
--ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
--ginkgo.progress is deprecated. The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
--ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.9.1
--- FAIL: TestE2E (728.78s)
FAIL
Ginkgo ran 1 suite in 12m9.051658491s
Test Suite Failed
You're using deprecated Ginkgo functionality:
=============================================
--slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.9.1
[FAILED] Unexpected error: <*errors.withStack | 0xc002e28f60>: { error: <*errors.withMessage | 0xc002b12900>{ cause: <*errors.errorString | 0xc0004fa310>{ s: "error container run failed with exit code 1", }, msg: "Unable to run conformance tests", }, stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761], } Unable to run conformance tests: error container run failed with exit code 1 occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048
< Exit [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98 @ 03/16/23 22:58:23.049 (19m48.447s)
> Enter [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:231 @ 03/16/23 22:58:23.049
Mar 16 22:58:23.049: INFO: FAILED!
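Triage note (editorial, not part of the captured log): every crashlooping Windows hostProcess container in the pod dump above (containerd-logger, csi-proxy, kube-proxy) reports the same exitCode, -1073741510. A minimal Go sketch of the interpretation, assuming the value is the raw 32-bit Windows process exit status reported by the kubelet; reinterpreted as an unsigned NTSTATUS it is 0xC000013A (STATUS_CONTROL_C_EXIT), which usually means the process was terminated by a console-control/shutdown event rather than crashing on its own:

    package main

    import "fmt"

    func main() {
        // Exit code reported for the crashlooping hostProcess containers above.
        exitCode := int32(-1073741510)

        // Reinterpret the signed 32-bit exit code as an unsigned NTSTATUS value.
        fmt.Printf("NTSTATUS: 0x%08X\n", uint32(exitCode))
        // Output: NTSTATUS: 0xC000013A
    }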
Mar 16 22:58:23.050: INFO: Cleaning up after "Conformance Tests conformance-tests" spec Mar 16 22:58:23.050: INFO: Dumping all the Cluster API resources in the "capz-conf-0bueug" namespace STEP: Dumping logs from the "capz-conf-0bueug" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:96 @ 03/16/23 22:58:23.785 Mar 16 22:58:23.785: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug logs Mar 16 22:58:23.866: INFO: Collecting logs for Linux node capz-conf-0bueug-control-plane-mj5bc in cluster capz-conf-0bueug in namespace capz-conf-0bueug Mar 16 22:58:38.112: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:39.087: INFO: Collecting logs for Windows node capz-conf-scwjd in cluster capz-conf-0bueug in namespace capz-conf-0bueug Mar 16 23:01:06.966: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-scwjd to /logs/artifacts/clusters/capz-conf-0bueug/machines/capz-conf-0bueug-md-win-786c6dcc6f-d9khz/crashdumps.tar Mar 16 23:01:08.508: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-md-win-scwjd Mar 16 23:01:09.335: INFO: Collecting logs for Windows node capz-conf-275z6 in cluster capz-conf-0bueug in namespace capz-conf-0bueug Mar 16 23:03:39.025: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-275z6 to /logs/artifacts/clusters/capz-conf-0bueug/machines/capz-conf-0bueug-md-win-786c6dcc6f-j5vpk/crashdumps.tar Mar 16 23:03:40.640: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-md-win-275z6 Mar 16 23:03:41.422: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug nodes Mar 16 23:03:41.731: INFO: Describing Node capz-conf-0bueug-control-plane-mj5bc Mar 16 23:03:41.930: INFO: Describing Node capz-conf-275z6 Mar 16 23:03:42.120: INFO: Describing Node capz-conf-scwjd Mar 16 23:03:42.303: INFO: Fetching nodes took 880.352112ms Mar 16 23:03:42.303: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug pod logs Mar 16 23:03:42.565: INFO: Describing Pod calico-apiserver/calico-apiserver-d5667676d-n4fs8 Mar 16 23:03:42.565: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-d5667676d-n4fs8, container calico-apiserver Mar 16 23:03:42.631: INFO: Describing Pod calico-apiserver/calico-apiserver-d5667676d-p688x Mar 16 23:03:42.632: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-d5667676d-p688x, container calico-apiserver Mar 16 23:03:42.699: INFO: Describing Pod calico-system/calico-kube-controllers-59d9cb8fbb-5jzmf Mar 16 23:03:42.700: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-59d9cb8fbb-5jzmf, container calico-kube-controllers Mar 16 23:03:42.774: INFO: Describing Pod calico-system/calico-node-h559n Mar 16 23:03:42.775: INFO: Creating log watcher for controller calico-system/calico-node-h559n, container calico-node Mar 16 23:03:42.870: INFO: Describing Pod calico-system/calico-node-windows-64sf9 Mar 16 23:03:42.870: INFO: Creating log watcher for controller calico-system/calico-node-windows-64sf9, container calico-node-startup Mar 16 23:03:42.870: INFO: Creating log watcher for controller calico-system/calico-node-windows-64sf9, container calico-node-felix Mar 16 23:03:42.923: INFO: Error starting logs stream for pod calico-system/calico-node-windows-64sf9, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-64sf9" is waiting to start: PodInitializing Mar 16 23:03:42.924: INFO: Error starting 
logs stream for pod calico-system/calico-node-windows-64sf9, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-64sf9" is waiting to start: PodInitializing Mar 16 23:03:42.936: INFO: Describing Pod calico-system/calico-node-windows-ptp8l Mar 16 23:03:42.936: INFO: Creating log watcher for controller calico-system/calico-node-windows-ptp8l, container calico-node-startup Mar 16 23:03:42.936: INFO: Creating log watcher for controller calico-system/calico-node-windows-ptp8l, container calico-node-felix Mar 16 23:03:42.985: INFO: Error starting logs stream for pod calico-system/calico-node-windows-ptp8l, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-ptp8l" is waiting to start: PodInitializing Mar 16 23:03:42.985: INFO: Error starting logs stream for pod calico-system/calico-node-windows-ptp8l, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-ptp8l" is waiting to start: PodInitializing Mar 16 23:03:43.331: INFO: Describing Pod calico-system/calico-typha-7998d677cf-226xr Mar 16 23:03:43.331: INFO: Creating log watcher for controller calico-system/calico-typha-7998d677cf-226xr, container calico-typha Mar 16 23:03:43.731: INFO: Describing Pod calico-system/csi-node-driver-svgcw Mar 16 23:03:43.731: INFO: Creating log watcher for controller calico-system/csi-node-driver-svgcw, container calico-csi Mar 16 23:03:43.732: INFO: Creating log watcher for controller calico-system/csi-node-driver-svgcw, container csi-node-driver-registrar Mar 16 23:03:44.133: INFO: Describing Pod kube-system/containerd-logger-dv27w Mar 16 23:03:44.133: INFO: Creating log watcher for controller kube-system/containerd-logger-dv27w, container containerd-logger Mar 16 23:03:44.532: INFO: Describing Pod kube-system/containerd-logger-lsh6r Mar 16 23:03:44.533: INFO: Creating log watcher for controller kube-system/containerd-logger-lsh6r, container containerd-logger Mar 16 23:03:44.951: INFO: Describing Pod kube-system/coredns-5d78c9869d-jg2mq Mar 16 23:03:44.951: INFO: Creating log watcher for controller kube-system/coredns-5d78c9869d-jg2mq, container coredns Mar 16 23:03:45.336: INFO: Describing Pod kube-system/coredns-5d78c9869d-nbrqn Mar 16 23:03:45.336: INFO: Creating log watcher for controller kube-system/coredns-5d78c9869d-nbrqn, container coredns Mar 16 23:03:45.739: INFO: Describing Pod kube-system/csi-azuredisk-controller-56db99df6c-9zdpw Mar 16 23:03:45.739: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-9zdpw, container csi-snapshotter Mar 16 23:03:45.740: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-9zdpw, container csi-resizer Mar 16 23:03:45.742: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-9zdpw, container azuredisk Mar 16 23:03:45.742: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-9zdpw, container liveness-probe Mar 16 23:03:45.743: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-9zdpw, container csi-attacher Mar 16 23:03:45.742: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-56db99df6c-9zdpw, container csi-provisioner Mar 16 23:03:46.136: INFO: Describing Pod kube-system/csi-azuredisk-node-v7lzh Mar 16 23:03:46.136: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container liveness-probe Mar 16 23:03:46.136: INFO: 
Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container node-driver-registrar Mar 16 23:03:46.136: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container azuredisk Mar 16 23:03:46.544: INFO: Describing Pod kube-system/csi-azuredisk-node-win-tf9rw Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container node-driver-registrar Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container liveness-probe Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container azuredisk Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing Mar 16 23:03:46.933: INFO: Describing Pod kube-system/csi-azuredisk-node-win-vrwwk Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container liveness-probe Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container azuredisk Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container node-driver-registrar Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing Mar 16 23:03:47.339: INFO: Describing Pod kube-system/csi-proxy-dm54w Mar 16 23:03:47.339: INFO: Creating log watcher for controller kube-system/csi-proxy-dm54w, container csi-proxy Mar 16 23:03:47.734: INFO: Describing Pod kube-system/csi-proxy-fwgj7 Mar 16 23:03:47.734: INFO: Creating log watcher for controller kube-system/csi-proxy-fwgj7, container csi-proxy Mar 16 23:03:48.133: INFO: Describing Pod kube-system/etcd-capz-conf-0bueug-control-plane-mj5bc Mar 16 23:03:48.133: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-0bueug-control-plane-mj5bc, container etcd Mar 16 23:03:48.538: INFO: Describing Pod kube-system/kube-apiserver-capz-conf-0bueug-control-plane-mj5bc Mar 16 23:03:48.539: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-0bueug-control-plane-mj5bc, container kube-apiserver Mar 16 23:03:48.934: INFO: Describing Pod kube-system/kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc Mar 16 
23:03:48.934: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc, container kube-controller-manager Mar 16 23:03:49.337: INFO: Describing Pod kube-system/kube-proxy-windows-bgfqk Mar 16 23:03:49.337: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-bgfqk, container kube-proxy Mar 16 23:03:49.734: INFO: Describing Pod kube-system/kube-proxy-windows-x8pwv Mar 16 23:03:49.734: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-x8pwv, container kube-proxy Mar 16 23:03:50.135: INFO: Describing Pod kube-system/kube-proxy-xbdgr Mar 16 23:03:50.135: INFO: Creating log watcher for controller kube-system/kube-proxy-xbdgr, container kube-proxy Mar 16 23:03:50.533: INFO: Describing Pod kube-system/kube-scheduler-capz-conf-0bueug-control-plane-mj5bc Mar 16 23:03:50.533: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-0bueug-control-plane-mj5bc, container kube-scheduler Mar 16 23:03:50.936: INFO: Describing Pod kube-system/metrics-server-6987569d96-8kswn Mar 16 23:03:50.936: INFO: Creating log watcher for controller kube-system/metrics-server-6987569d96-8kswn, container metrics-server Mar 16 23:03:51.333: INFO: Describing Pod tigera-operator/tigera-operator-59c686f986-rt8kc Mar 16 23:03:51.333: INFO: Fetching pod logs took 9.030368311s Mar 16 23:03:51.333: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug Azure activity log Mar 16 23:03:51.333: INFO: Creating log watcher for controller tigera-operator/tigera-operator-59c686f986-rt8kc, container tigera-operator Mar 16 23:03:54.274: INFO: Fetching activity logs took 2.940296234s Mar 16 23:03:54.274: INFO: Deleting all clusters in the capz-conf-0bueug namespace STEP: Deleting cluster capz-conf-0bueug - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/16/23 23:03:54.295 INFO: Waiting for the Cluster capz-conf-0bueug/capz-conf-0bueug to be deleted STEP: Waiting for cluster capz-conf-0bueug to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/framework/ginkgoextensions/output.go:35 @ 03/16/23 23:03:54.313 Mar 16 23:09:44.531: INFO: Deleting namespace used for hosting the "conformance-tests" test spec INFO: Deleting namespace capz-conf-0bueug Mar 16 23:09:44.580: INFO: Checking if any resources are left over in Azure for spec "conformance-tests" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:220 @ 03/16/23 23:09:45.042 < Exit [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:231 @ 03/16/23 23:10:11.237 (11m48.188s)
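The Ginkgo deprecation warnings repeated in the output above all point at the same v2 migration: --ginkgo.flakeAttempts becomes --ginkgo.flake-attempts, while --ginkgo.progress and --ginkgo.slow-spec-threshold are replaced by --show-node-events and --poll-progress-after (or their decorator equivalents), with ACK_GINKGO_DEPRECATIONS=2.9.1 silencing the remaining notices. The sketch below is not taken from this job; it is a minimal Ginkgo v2 suite, with an illustrative spec name and an assumed 30-second threshold, showing the decorator form of those replacements.

package e2e_test

import (
    "testing"
    "time"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
    RegisterFailHandler(Fail)
    RunSpecs(t, "conformance suite sketch")
}

var _ = Describe("conformance-tests", func() {
    // FlakeAttempts(3) replaces --ginkgo.flakeAttempts=3; PollProgressAfter replaces
    // --ginkgo.progress / --ginkgo.slow-spec-threshold by emitting a progress report
    // once the spec has been running longer than the given duration.
    It("creates a workload cluster", FlakeAttempts(3), PollProgressAfter(30*time.Second), func() {
        Expect(true).To(BeTrue())
    })
})

On the command line the same intent is expressed with the --flake-attempts, --show-node-events and --poll-progress-after flags named in the warnings above.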
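The <*errors.withStack> / <*errors.withMessage> / <*errors.errorString> chain dumped with the failure above matches what github.com/pkg/errors produces when a plain error is wrapped. Assuming that library (the actual call site at conformance_test.go:227 is not shown here, and runConformance is an illustrative name, not the test's real helper), a minimal reproduction of the message format looks like this.

package main

import (
    "errors"
    "fmt"

    pkgerrors "github.com/pkg/errors"
)

func runConformance() error {
    // Root cause, a plain *errors.errorString as in the dump.
    cause := errors.New("error container run failed with exit code 1")
    // Wrap adds *errors.withMessage and *errors.withStack around the cause.
    return pkgerrors.Wrap(cause, "Unable to run conformance tests")
}

func main() {
    err := runConformance()
    // Prints: Unable to run conformance tests: error container run failed with exit code 1
    fmt.Println(err)
    // Cause unwraps back to the original root error.
    fmt.Println(pkgerrors.Cause(err))
}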
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the intree cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster with VMSS flex machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters on public MEC [OPTIONAL] with 1 control plane nodes and 1 worker node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 138 lines ... Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 138 100 138 0 0 2123 0 --:--:-- --:--:-- --:--:-- 2090 100 34 100 34 0 0 225 0 --:--:-- --:--:-- --:--:-- 225 using CI_VERSION=v1.27.0-alpha.3.828+a34e37c9963af5 using KUBERNETES_VERSION=v1.27.0-alpha.3.828+a34e37c9963af5 using IMAGE_TAG=v1.27.0-alpha.3.830_9fce3cd4b80206 Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206 not found: manifest unknown: manifest tagged by "v1.27.0-alpha.3.830_9fce3cd4b80206" is not found Building Kubernetes make: Entering directory '/home/prow/go/src/k8s.io/kubernetes' +++ [0316 22:05:10] WARNING: linux/arm will no longer be built/shipped by default, please build it explicitly if needed. +++ [0316 22:05:10] support for linux/arm will be removed in a subsequent release. +++ [0316 22:05:10] Verifying Prerequisites.... +++ [0316 22:05:10] Building Docker image kube-build:build-3143ee45e4-5-v1.27.0-go1.20.2-bullseye.0 ... skipping 820 lines ... ------------------------------ Conformance Tests conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98 INFO: Cluster name is capz-conf-0bueug STEP: Creating namespace "capz-conf-0bueug" for hosting the cluster @ 03/16/23 22:38:34.549 Mar 16 22:38:34.549: INFO: starting to create namespace for hosting the "capz-conf-0bueug" test spec 2023/03/16 22:38:34 failed trying to get namespace (capz-conf-0bueug):namespaces "capz-conf-0bueug" not found INFO: Creating namespace capz-conf-0bueug INFO: Creating event watcher for namespace "capz-conf-0bueug" conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 03/16/23 22:38:34.602 conformance-tests Name | N | Min | Median | Mean | StdDev | Max INFO: Creating the workload cluster with name "capz-conf-0bueug" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-alpha.3.828+a34e37c9963af5, 1 control-plane machines, 0 worker machines) ... skipping 99 lines ... ==================================================== Random Seed: 1679006773 - will randomize all specs Will run 348 of 7207 specs Running in parallel across 4 processes ------------------------------ [SynchronizedBeforeSuite] [FAILED] [728.313 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 Timeline >> Mar 16 22:46:13.817: INFO: >>> kubeConfig: /tmp/kubeconfig Mar 16 22:46:13.820: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable ... skipping 39 lines ... 
Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-6n5z6 Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulDelete: Deleted pod: kube-proxy-6n5z6 Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy-6n5z6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-6n5z6 to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-controller-manager:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-xbdgr Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-6n5z6: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "kube-proxy" : object "kube-system"/"kube-proxy" not registered Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-6n5z6: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-rhjfr" : object "kube-system"/"kube-root-ca.crt" not registered Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-xbdgr: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-proxy:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-xbdgr: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-xbdgr to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-scheduler:v1.27.0-alpha.3.830_9fce3cd4b80206" Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-apiserver Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-controller-manager Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-scheduler ... skipping 18 lines ... 
Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for metrics-server: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-6987569d96 to 1 Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for metrics-server-6987569d96: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-6987569d96-8kswn Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-0bueug-control-plane-mj5bc_4790ca42-5f76-4363-8a8d-bc2307d9f033 became leader Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for metrics-server-6987569d96-8kswn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:35 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-jg2mq to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-nbrqn to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for metrics-server-6987569d96-8kswn: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-6987569d96-8kswn to capz-conf-0bueug-control-plane-mj5bc Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:52 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:52 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "927e714bcc0b5ae751075c38c9b7988d11d9f9ca0742dcc8ba26334e5813d4b8": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 16 
22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "946dd33ebcc4c32f473c66188ba91c8675b4c7a0b2183ebdecaba866f615d02d": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f06ca875435501c5124ae9ffa6822484534de14eb5e4418f383a442d84e03e54": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/ Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:54 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:54 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container coredns Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container coredns Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:08 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" ... skipping 71 lines ... 
Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:34 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Killing: Stopping container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Started: Started container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Killing: Stopping container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Created: Created container containerd-logger Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:38 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 326.9977ms (326.9977ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:40 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 444.5104ms (444.5104ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:40 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-bgfqk_kube-system(1b0f5228-df77-4180-b53a-20f0f3d5acb4) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:44 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-x8pwv_kube-system(434d370f-88b5-4ede-acf0-2fe2029b30d0) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:49 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 374.7954ms (374.7954ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:51 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 411.3733ms (411.3733ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:00 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 362.5522ms (362.5522ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:02 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 470.8619ms (471.347ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:11 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-lsh6r_kube-system(017a5a4a-d9d2-4bc3-8671-6ed7c34dd141) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-vrwwk Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: 
{default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-vrwwk to capz-conf-275z6 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-fwgj7 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy-fwgj7: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-proxy-fwgj7 to capz-conf-275z6 Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:13 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 460.8675ms (460.8675ms including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:23 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-dv27w_kube-system(8b158921-6e6f-4293-aa4d-f1ba3f8d6022) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Created: Created container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 14.3317146s (14.6425719s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Started: Started container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:28 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Killing: Stopping container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:32 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Created: Created container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 16.0298164s (30.8268854s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-fwgj7_kube-system(ec53bf42-2782-4e41-954c-24c0694b8136) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:44 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Started: Started container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:44 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Killing: Stopping container init Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:49 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-tf9rw Mar 16 22:58:14.155: INFO: 
At 2023-03-16 22:48:01 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-tf9rw to capz-conf-scwjd Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-dm54w ... skipping 7 lines ... Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:19 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 14.7685822s (29.5448352s including waiting) Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Started: Started container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Created: Created container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:33 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Killing: Stopping container csi-proxy Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:37 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:48 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-dm54w_kube-system(1dafe25d-5961-4f8a-8685-e52c2150ab68) Mar 16 22:58:14.216: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:58:14.216: INFO: containerd-logger-dv27w capz-conf-275z6 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:20 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:20 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:07 +0000 UTC }] Mar 16 22:58:14.216: INFO: containerd-logger-lsh6r capz-conf-scwjd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:03 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:03 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC }] Mar 16 22:58:14.216: INFO: coredns-5d78c9869d-jg2mq capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC }] Mar 16 22:58:14.216: INFO: coredns-5d78c9869d-nbrqn capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2023-03-16 22:44:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC }] Mar 16 22:58:14.216: INFO: csi-azuredisk-controller-56db99df6c-9zdpw capz-conf-0bueug-control-plane-mj5bc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC }] ... skipping 137 lines ... ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ... skipping 28 lines ... ] } ], "filters": [ { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error" }, { "type": "drop", "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error" } ], "outputs": [ { "type": "StdOutput" } ], "schemaVersion": "2016-08-11" } Logging started... 
ENDLOG for container kube-system:containerd-logger-lsh6r:containerd-logger Mar 16 22:58:19.327: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw) Mar 16 22:58:19.327: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:liveness-probe on node capz-conf-scwjd Mar 16 22:58:19.327: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:liveness-probe Mar 16 22:58:19.727: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw) Mar 16 22:58:19.727: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:node-driver-registrar on node capz-conf-scwjd Mar 16 22:58:19.727: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:node-driver-registrar Mar 16 22:58:20.127: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw) Mar 16 22:58:20.127: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:azuredisk on node capz-conf-scwjd Mar 16 22:58:20.127: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:azuredisk Mar 16 22:58:20.527: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk) Mar 16 22:58:20.527: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:liveness-probe on node capz-conf-275z6 Mar 16 22:58:20.527: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:liveness-probe Mar 16 22:58:20.926: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk) Mar 16 22:58:20.926: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:node-driver-registrar on node capz-conf-275z6 Mar 16 22:58:20.926: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:node-driver-registrar Mar 16 22:58:21.327: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk) Mar 16 22:58:21.327: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:azuredisk on node capz-conf-275z6 Mar 16 22:58:21.327: INFO: : STARTLOG ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:azuredisk Mar 16 22:58:21.537: INFO: Logs of kube-system/csi-proxy-dm54w:csi-proxy on node capz-conf-scwjd Mar 16 22:58:21.537: INFO: : STARTLOG ... skipping 12 lines ... ENDLOG for container kube-system:kube-proxy-windows-bgfqk:kube-proxy Mar 16 22:58:22.128: INFO: Logs of kube-system/kube-proxy-windows-x8pwv:kube-proxy on node capz-conf-275z6 Mar 16 22:58:22.128: INFO: : STARTLOG ENDLOG for container kube-system:kube-proxy-windows-x8pwv:kube-proxy [FAILED] in [SynchronizedBeforeSuite] - test/e2e/e2e.go:242 @ 03/16/23 22:58:22.129 << Timeline [FAILED] Error waiting for all pods to be running and ready: Timed out after 600.001s. Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0). 10 / 18 pods were running and ready. 
Expected 4 pod replicas, 4 are Running and Ready. Pods that were neither completed nor running: <[]v1.Pod | len:8, cap:8>: - metadata: ... skipping 237 lines ... imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 lastState: terminated: containerID: containerd://a13b3dcc37d95a1f2869569555c1ea190f9f67ae6660b3e78f036574afeeaabb exitCode: -1073741510 finishedAt: "2023-03-16T22:57:15Z" reason: Error startedAt: "2023-03-16T22:57:14Z" name: containerd-logger ready: false restartCount: 10 started: false state: waiting: message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-dv27w_kube-system(8b158921-6e6f-4293-aa4d-f1ba3f8d6022) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 ... skipping 240 lines ... imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 lastState: terminated: containerID: containerd://5466d33baabec2eac4ebb86646586f775059d171a0498b0b8ab965a8c10f0639 exitCode: -1073741510 finishedAt: "2023-03-16T22:56:57Z" reason: Error startedAt: "2023-03-16T22:56:57Z" name: containerd-logger ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-lsh6r_kube-system(017a5a4a-d9d2-4bc3-8671-6ed7c34dd141) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 ... skipping 1237 lines ... imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba lastState: terminated: containerID: containerd://5eca80256499b374f486996578e81b3aff277eb497e5452059f2b0a6b584e98f exitCode: -1073741510 finishedAt: "2023-03-16T22:54:01Z" reason: Error startedAt: "2023-03-16T22:54:00Z" name: csi-proxy ready: false restartCount: 7 started: false state: waiting: message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-dm54w_kube-system(1dafe25d-5961-4f8a-8685-e52c2150ab68) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 ... skipping 211 lines ... imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba lastState: terminated: containerID: containerd://b743aa1990fa5cacdeef2891f352986125e55081e9e1f09b4f53855b942578d2 exitCode: -1073741510 finishedAt: "2023-03-16T22:57:02Z" reason: Error startedAt: "2023-03-16T22:57:02Z" name: csi-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-fwgj7_kube-system(ec53bf42-2782-4e41-954c-24c0694b8136) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 ... skipping 279 lines ... imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed lastState: terminated: containerID: containerd://be073a99f597d9da07bf80b7b793854fe444f5f6230fd708f5d008ae2e736908 exitCode: -1073741510 finishedAt: "2023-03-16T22:55:58Z" reason: Error startedAt: "2023-03-16T22:55:58Z" name: kube-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-bgfqk_kube-system(1b0f5228-df77-4180-b53a-20f0f3d5acb4) reason: CrashLoopBackOff hostIP: 10.1.0.5 phase: Running podIP: 10.1.0.5 podIPs: - ip: 10.1.0.5 ... 
skipping 279 lines ... imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed lastState: terminated: containerID: containerd://195e6a8c7720308f7313bf0022da068f10c0d49a9d7d1a6411692b1d316f2c8d exitCode: -1073741510 finishedAt: "2023-03-16T22:55:52Z" reason: Error startedAt: "2023-03-16T22:55:52Z" name: kube-proxy ready: false restartCount: 9 started: false state: waiting: message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-x8pwv_kube-system(434d370f-88b5-4ede-acf0-2fe2029b30d0) reason: CrashLoopBackOff hostIP: 10.1.0.4 phase: Running podIP: 10.1.0.4 podIPs: - ip: 10.1.0.4 qosClass: BestEffort startTime: "2023-03-16T22:45:08Z" In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:242 @ 03/16/23 22:58:22.129 ------------------------------ [SynchronizedBeforeSuite] [FAILED] [728.265 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1 The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort. In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.165 ------------------------------ [SynchronizedBeforeSuite] [FAILED] [728.292 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1 The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort. In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.167 ------------------------------ [SynchronizedBeforeSuite] [FAILED] [728.296 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1 The first SynchronizedBeforeSuite function running on Ginkgo parallel process #1 failed. This suite will now abort. In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.167 ------------------------------ Summarizing 4 Failures: [FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [FAIL] [SynchronizedBeforeSuite] test/e2e/e2e.go:242 Ran 0 of 7207 Specs in 728.460 seconds FAIL! -- A BeforeSuite node failed so all tests were skipped. I0316 22:46:13.399650 14 e2e.go:117] Starting e2e run "4af2e184-7c1d-4a05-ae29-fb6d39ca4fea" on Ginkgo node 1 You're using deprecated Ginkgo functionality: ============================================= --ginkgo.progress is deprecated . The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. 
Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck. --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck. --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1 --- FAIL: TestE2E (728.90s) FAIL I0316 22:46:13.397133 16 e2e.go:117] Starting e2e run "f4cfa78f-3b54-4e19-9629-095930e680bb" on Ginkgo node 2 You're using deprecated Ginkgo functionality: ============================================= --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags --ginkgo.progress is deprecated . The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck. --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck. To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1 --- FAIL: TestE2E (728.79s) FAIL I0316 22:46:13.402781 17 e2e.go:117] Starting e2e run "e39c4442-71a1-4ace-8627-54aecbc25947" on Ginkgo node 3 You're using deprecated Ginkgo functionality: ============================================= --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags --ginkgo.progress is deprecated . The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck. --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. 
You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck. To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1 --- FAIL: TestE2E (728.79s) FAIL I0316 22:46:13.401674 19 e2e.go:117] Starting e2e run "c0b35fcf-0d00-4215-aaaf-3c73b83e8307" on Ginkgo node 4 You're using deprecated Ginkgo functionality: ============================================= --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags --ginkgo.progress is deprecated . The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck. --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck. To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1 --- FAIL: TestE2E (728.78s) FAIL Ginkgo ran 1 suite in 12m9.051658491s Test Suite Failed You're using deprecated Ginkgo functionality: ============================================= --slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0') Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.9.1 [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048 Mar 16 22:58:23.049: INFO: FAILED! 
Mar 16 22:58:23.050: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
Mar 16 22:58:23.050: INFO: Dumping all the Cluster API resources in the "capz-conf-0bueug" namespace
STEP: Dumping logs from the "capz-conf-0bueug" workload cluster @ 03/16/23 22:58:23.785
Mar 16 22:58:23.785: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug logs
Mar 16 22:58:23.866: INFO: Collecting logs for Linux node capz-conf-0bueug-control-plane-mj5bc in cluster capz-conf-0bueug in namespace capz-conf-0bueug
Mar 16 22:58:38.112: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-control-plane-mj5bc
Mar 16 22:58:39.087: INFO: Collecting logs for Windows node capz-conf-scwjd in cluster capz-conf-0bueug in namespace capz-conf-0bueug
Mar 16 23:01:06.966: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-scwjd to /logs/artifacts/clusters/capz-conf-0bueug/machines/capz-conf-0bueug-md-win-786c6dcc6f-d9khz/crashdumps.tar
Mar 16 23:01:08.508: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-md-win-scwjd
Failed to get logs for Machine capz-conf-0bueug-md-win-786c6dcc6f-d9khz, Cluster capz-conf-0bueug/capz-conf-0bueug: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Mar 16 23:01:09.335: INFO: Collecting logs for Windows node capz-conf-275z6 in cluster capz-conf-0bueug in namespace capz-conf-0bueug
Mar 16 23:03:39.025: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-275z6 to /logs/artifacts/clusters/capz-conf-0bueug/machines/capz-conf-0bueug-md-win-786c6dcc6f-j5vpk/crashdumps.tar
Mar 16 23:03:40.640: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-md-win-275z6
Failed to get logs for Machine capz-conf-0bueug-md-win-786c6dcc6f-j5vpk, Cluster capz-conf-0bueug/capz-conf-0bueug: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Mar 16 23:03:41.422: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug nodes
Mar 16 23:03:41.731: INFO: Describing Node capz-conf-0bueug-control-plane-mj5bc
Mar 16 23:03:41.930: INFO: Describing Node capz-conf-275z6
Mar 16 23:03:42.120: INFO: Describing Node capz-conf-scwjd
Mar 16 23:03:42.303: INFO: Fetching nodes took 880.352112ms
Mar 16 23:03:42.303: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug pod logs
... skipping 5 lines ...
Mar 16 23:03:42.700: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-59d9cb8fbb-5jzmf, container calico-kube-controllers
Mar 16 23:03:42.774: INFO: Describing Pod calico-system/calico-node-h559n
Mar 16 23:03:42.775: INFO: Creating log watcher for controller calico-system/calico-node-h559n, container calico-node
Mar 16 23:03:42.870: INFO: Describing Pod calico-system/calico-node-windows-64sf9
Mar 16 23:03:42.870: INFO: Creating log watcher for controller calico-system/calico-node-windows-64sf9, container calico-node-startup
Mar 16 23:03:42.870: INFO: Creating log watcher for controller calico-system/calico-node-windows-64sf9, container calico-node-felix
Mar 16 23:03:42.923: INFO: Error starting logs stream for pod calico-system/calico-node-windows-64sf9, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-64sf9" is waiting to start: PodInitializing
Mar 16 23:03:42.924: INFO: Error starting logs stream for pod calico-system/calico-node-windows-64sf9, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-64sf9" is waiting to start: PodInitializing
Mar 16 23:03:42.936: INFO: Describing Pod calico-system/calico-node-windows-ptp8l
Mar 16 23:03:42.936: INFO: Creating log watcher for controller calico-system/calico-node-windows-ptp8l, container calico-node-startup
Mar 16 23:03:42.936: INFO: Creating log watcher for controller calico-system/calico-node-windows-ptp8l, container calico-node-felix
Mar 16 23:03:42.985: INFO: Error starting logs stream for pod calico-system/calico-node-windows-ptp8l, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-ptp8l" is waiting to start: PodInitializing
Mar 16 23:03:42.985: INFO: Error starting logs stream for pod calico-system/calico-node-windows-ptp8l, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-ptp8l" is waiting to start: PodInitializing
Mar 16 23:03:43.331: INFO: Describing Pod calico-system/calico-typha-7998d677cf-226xr
Mar 16 23:03:43.331: INFO: Creating log watcher for controller calico-system/calico-typha-7998d677cf-226xr, container calico-typha
Mar 16 23:03:43.731: INFO: Describing Pod calico-system/csi-node-driver-svgcw
Mar 16 23:03:43.731: INFO: Creating log watcher for controller calico-system/csi-node-driver-svgcw, container calico-csi
Mar 16 23:03:43.732: INFO: Creating log watcher for controller calico-system/csi-node-driver-svgcw, container csi-node-driver-registrar
Mar 16 23:03:44.133: INFO: Describing Pod kube-system/containerd-logger-dv27w
... skipping 16 lines ...
Mar 16 23:03:46.136: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container node-driver-registrar
Mar 16 23:03:46.136: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container azuredisk
Mar 16 23:03:46.544: INFO: Describing Pod kube-system/csi-azuredisk-node-win-tf9rw
Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container node-driver-registrar
Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container liveness-probe
Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container azuredisk
Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing
Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing
Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing
Mar 16 23:03:46.933: INFO: Describing Pod kube-system/csi-azuredisk-node-win-vrwwk
Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container liveness-probe
Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container azuredisk
Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container node-driver-registrar
Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing
Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing
Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing
Mar 16 23:03:47.339: INFO: Describing Pod kube-system/csi-proxy-dm54w
Mar 16 23:03:47.339: INFO: Creating log watcher for controller kube-system/csi-proxy-dm54w, container csi-proxy
Mar 16 23:03:47.734: INFO: Describing Pod kube-system/csi-proxy-fwgj7
Mar 16 23:03:47.734: INFO: Creating log watcher for controller kube-system/csi-proxy-fwgj7, container csi-proxy
Mar 16 23:03:48.133: INFO: Describing Pod kube-system/etcd-capz-conf-0bueug-control-plane-mj5bc
Mar 16 23:03:48.133: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-0bueug-control-plane-mj5bc, container etcd
... skipping 21 lines ...
INFO: Waiting for the Cluster capz-conf-0bueug/capz-conf-0bueug to be deleted
STEP: Waiting for cluster capz-conf-0bueug to be deleted @ 03/16/23 23:03:54.313
Mar 16 23:09:44.531: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-0bueug
Mar 16 23:09:44.580: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs @ 03/16/23 23:09:45.042
• [FAILED] [1896.688 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc002e28f60>: {
          error: <*errors.withMessage | 0xc002b12900>{
              cause: <*errors.errorString | 0xc0004fa310>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
        /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 +0x175a
... skipping 6 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.012 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227

Ran 1 of 25 Specs in 2035.613 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 24 Skipped
--- FAIL: TestE2E (2035.63s)
You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:297
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:300
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.8.4
FAIL
Ginkgo ran 1 suite in 36m11.333113255s
Test Suite Failed
make[3]: *** [Makefile:663: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:678: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:694: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:704: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
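The CurrentGinkgoTestDescription() deprecation flagged at common.go:297 and common.go:300 maps onto Ginkgo v2's CurrentSpecReport(). A minimal sketch of that replacement, assuming ginkgo/v2; the AfterEach body and the fields printed are illustrative rather than the suite's actual code:

    package e2e_test

    import (
    	"fmt"

    	. "github.com/onsi/ginkgo/v2"
    )

    // CurrentSpecReport() returns a SpecReport for the running spec, covering
    // what CurrentGinkgoTestDescription() exposed in Ginkgo v1 (spec text,
    // outcome, timing). This hook only logs failures; it is a sketch, not the
    // suite's real cleanup code.
    var _ = AfterEach(func() {
    	report := CurrentSpecReport()
    	if report.Failed() {
    		fmt.Printf("spec %q failed after %s\n", report.FullText(), report.RunTime)
    	}
    })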
================================================================================
... skipping 8 lines ...