PR: claudiubelu: Refactored kubelet's kuberuntime_sandbox
Result: FAILURE
Tests: 1 failed / 2 succeeded
Started: 2023-03-16 22:04
Elapsed: 1h7m
Revision: 5e605d81d57e2309b3c08f821c9dc41372f802c7
Refs: 114185

Test Failures


capz-e2e [It] Conformance Tests conformance-tests (31m36s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Unexpected error:
    <*errors.withStack | 0xc002e28f60>: {
        error: <*errors.withMessage | 0xc002b12900>{
            cause: <*errors.errorString | 0xc0004fa310>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048

				
stdout/stderr from junit.e2e_suite.1.xml



Passed Tests: 2

Skipped Tests: 24

Error lines from build-log.txt

... skipping 138 lines ...
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   138  100   138    0     0   2123      0 --:--:-- --:--:-- --:--:--  2090

100    34  100    34    0     0    225      0 --:--:-- --:--:-- --:--:--   225
using CI_VERSION=v1.27.0-alpha.3.828+a34e37c9963af5
using KUBERNETES_VERSION=v1.27.0-alpha.3.828+a34e37c9963af5
using IMAGE_TAG=v1.27.0-alpha.3.830_9fce3cd4b80206
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206 not found: manifest unknown: manifest tagged by "v1.27.0-alpha.3.830_9fce3cd4b80206" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0316 22:05:10] WARNING: linux/arm will no longer be built/shipped by default, please build it explicitly if needed.
+++ [0316 22:05:10]          support for linux/arm will be removed in a subsequent release.
+++ [0316 22:05:10] Verifying Prerequisites....
+++ [0316 22:05:10] Building Docker image kube-build:build-3143ee45e4-5-v1.27.0-go1.20.2-bullseye.0
... skipping 820 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98
  INFO: Cluster name is capz-conf-0bueug
  STEP: Creating namespace "capz-conf-0bueug" for hosting the cluster @ 03/16/23 22:38:34.549
  Mar 16 22:38:34.549: INFO: starting to create namespace for hosting the "capz-conf-0bueug" test spec
2023/03/16 22:38:34 failed trying to get namespace (capz-conf-0bueug):namespaces "capz-conf-0bueug" not found
  INFO: Creating namespace capz-conf-0bueug
  INFO: Creating event watcher for namespace "capz-conf-0bueug"
  conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 03/16/23 22:38:34.602
    conformance-tests
    Name | N | Min | Median | Mean | StdDev | Max
  INFO: Creating the workload cluster with name "capz-conf-0bueug" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-alpha.3.828+a34e37c9963af5, 1 control-plane machines, 0 worker machines)
... skipping 99 lines ...
  ====================================================
  Random Seed: 1679006773 - will randomize all specs
  
  Will run 348 of 7207 specs
  Running in parallel across 4 processes
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [728.313 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    Timeline >>
    Mar 16 22:46:13.817: INFO: >>> kubeConfig: /tmp/kubeconfig
    Mar 16 22:46:13.820: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
... skipping 39 lines ...
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-6n5z6
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulDelete: Deleted pod: kube-proxy-6n5z6
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:44 +0000 UTC - event for kube-proxy-6n5z6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-6n5z6 to capz-conf-0bueug-control-plane-mj5bc
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.3.830_9fce3cd4b80206"
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-controller-manager:v1.27.0-alpha.3.830_9fce3cd4b80206"
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-xbdgr
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-6n5z6: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "kube-proxy" : object "kube-system"/"kube-proxy" not registered

    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-6n5z6: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-rhjfr" : object "kube-system"/"kube-root-ca.crt" not registered

    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-xbdgr: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-proxy:v1.27.0-alpha.3.830_9fce3cd4b80206"
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-proxy-xbdgr: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-xbdgr to capz-conf-0bueug-control-plane-mj5bc
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:46 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "capzci.azurecr.io/kube-scheduler:v1.27.0-alpha.3.830_9fce3cd4b80206"
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-apiserver
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-controller-manager-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-controller-manager
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:42:47 +0000 UTC - event for kube-scheduler-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Killing: Stopping container kube-scheduler
... skipping 18 lines ...
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for metrics-server: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-6987569d96 to 1
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:15 +0000 UTC - event for metrics-server-6987569d96: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-6987569d96-8kswn
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
    Mar 16 22:58:14.154: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-0bueug-control-plane-mj5bc_4790ca42-5f76-4363-8a8d-bc2307d9f033 became leader
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:18 +0000 UTC - event for metrics-server-6987569d96-8kswn: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:35 +0000 UTC - event for kube-apiserver-capz-conf-0bueug-control-plane-mj5bc: {kubelet capz-conf-0bueug-control-plane-mj5bc} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-jg2mq to capz-conf-0bueug-control-plane-mj5bc
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-5d78c9869d-nbrqn to capz-conf-0bueug-control-plane-mj5bc
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:51 +0000 UTC - event for metrics-server-6987569d96-8kswn: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-6987569d96-8kswn to capz-conf-0bueug-control-plane-mj5bc
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:52 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:52 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "927e714bcc0b5ae751075c38c9b7988d11d9f9ca0742dcc8ba26334e5813d4b8": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "946dd33ebcc4c32f473c66188ba91c8675b4c7a0b2183ebdecaba866f615d02d": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:53 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f06ca875435501c5124ae9ffa6822484534de14eb5e4418f383a442d84e03e54": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:54 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:43:54 +0000 UTC - event for coredns-5d78c9869d-nbrqn: {kubelet capz-conf-0bueug-control-plane-mj5bc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Created: Created container coredns
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:07 +0000 UTC - event for coredns-5d78c9869d-jg2mq: {kubelet capz-conf-0bueug-control-plane-mj5bc} Started: Started container coredns
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:44:08 +0000 UTC - event for metrics-server-6987569d96-8kswn: {kubelet capz-conf-0bueug-control-plane-mj5bc} Pulling: Pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2"
... skipping 71 lines ...
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:34 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Killing: Stopping container containerd-logger
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Started: Started container containerd-logger
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Killing: Stopping container containerd-logger
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:35 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Created: Created container containerd-logger
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:38 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 326.9977ms (326.9977ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:40 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 444.5104ms (444.5104ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:40 +0000 UTC - event for kube-proxy-windows-bgfqk: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-bgfqk_kube-system(1b0f5228-df77-4180-b53a-20f0f3d5acb4)

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:44 +0000 UTC - event for kube-proxy-windows-x8pwv: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-x8pwv_kube-system(434d370f-88b5-4ede-acf0-2fe2029b30d0)

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:49 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 374.7954ms (374.7954ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:45:51 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 411.3733ms (411.3733ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:00 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 362.5522ms (362.5522ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:02 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 470.8619ms (471.347ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:11 +0000 UTC - event for containerd-logger-lsh6r: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-lsh6r_kube-system(017a5a4a-d9d2-4bc3-8671-6ed7c34dd141)

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-vrwwk
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0"
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-vrwwk to capz-conf-275z6
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-fwgj7
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy-fwgj7: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-proxy-fwgj7 to capz-conf-275z6
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:12 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2"
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:13 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 460.8675ms (460.8675ms including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:23 +0000 UTC - event for containerd-logger-dv27w: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-dv27w_kube-system(8b158921-6e6f-4293-aa4d-f1ba3f8d6022)

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Created: Created container csi-proxy
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 14.3317146s (14.6425719s including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:27 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Started: Started container csi-proxy
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:28 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Killing: Stopping container csi-proxy
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:32 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Created: Created container init
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 16.0298164s (30.8268854s including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:43 +0000 UTC - event for csi-proxy-fwgj7: {kubelet capz-conf-275z6} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-fwgj7_kube-system(ec53bf42-2782-4e41-954c-24c0694b8136)

    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:44 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Started: Started container init
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:44 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Killing: Stopping container init
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:46:49 +0000 UTC - event for csi-azuredisk-node-win-vrwwk: {kubelet capz-conf-275z6} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-tf9rw
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {default-scheduler } Scheduled: Successfully assigned kube-system/csi-azuredisk-node-win-tf9rw to capz-conf-scwjd
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:01 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-dm54w
... skipping 7 lines ...
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:19 +0000 UTC - event for csi-azuredisk-node-win-tf9rw: {kubelet capz-conf-scwjd} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 14.7685822s (29.5448352s including waiting)
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Started: Started container csi-proxy
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:32 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Created: Created container csi-proxy
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:33 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Killing: Stopping container csi-proxy
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:37 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine
    Mar 16 22:58:14.155: INFO: At 2023-03-16 22:48:48 +0000 UTC - event for csi-proxy-dm54w: {kubelet capz-conf-scwjd} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-dm54w_kube-system(1dafe25d-5961-4f8a-8685-e52c2150ab68)

    Mar 16 22:58:14.216: INFO: POD                                                           NODE                                  PHASE    GRACE  CONDITIONS
    Mar 16 22:58:14.216: INFO: containerd-logger-dv27w                                       capz-conf-275z6                       Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:20 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:20 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:07 +0000 UTC  }]
    Mar 16 22:58:14.216: INFO: containerd-logger-lsh6r                                       capz-conf-scwjd                       Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:03 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:57:03 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:08 +0000 UTC  }]
    Mar 16 22:58:14.216: INFO: coredns-5d78c9869d-jg2mq                                      capz-conf-0bueug-control-plane-mj5bc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC  }]
    Mar 16 22:58:14.216: INFO: coredns-5d78c9869d-nbrqn                                      capz-conf-0bueug-control-plane-mj5bc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:43:51 +0000 UTC  }]
    Mar 16 22:58:14.216: INFO: csi-azuredisk-controller-56db99df6c-9zdpw                     capz-conf-0bueug-control-plane-mj5bc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-16 22:44:40 +0000 UTC  }]
... skipping 137 lines ...
          ]
        }
      ],
      "filters": [
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error"

        }
      ],
      "outputs": [
        {
          "type": "StdOutput"
        }
... skipping 28 lines ...
          ]
        }
      ],
      "filters": [
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error"

        }
      ],
      "outputs": [
        {
          "type": "StdOutput"
        }
      ],
      "schemaVersion": "2016-08-11"
    }
  
    Logging started...
  
    ENDLOG for container kube-system:containerd-logger-lsh6r:containerd-logger
    Mar 16 22:58:19.327: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw)

    Mar 16 22:58:19.327: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:liveness-probe on node capz-conf-scwjd
    Mar 16 22:58:19.327: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:liveness-probe
    Mar 16 22:58:19.727: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw)

    Mar 16 22:58:19.727: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:node-driver-registrar on node capz-conf-scwjd
    Mar 16 22:58:19.727: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:node-driver-registrar
    Mar 16 22:58:20.127: INFO: Failed to get logs of pod csi-azuredisk-node-win-tf9rw, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-tf9rw)

    Mar 16 22:58:20.127: INFO: Logs of kube-system/csi-azuredisk-node-win-tf9rw:azuredisk on node capz-conf-scwjd
    Mar 16 22:58:20.127: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-tf9rw:azuredisk
    Mar 16 22:58:20.527: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk)

    Mar 16 22:58:20.527: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:liveness-probe on node capz-conf-275z6
    Mar 16 22:58:20.527: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:liveness-probe
    Mar 16 22:58:20.926: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk)

    Mar 16 22:58:20.926: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:node-driver-registrar on node capz-conf-275z6
    Mar 16 22:58:20.926: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:node-driver-registrar
    Mar 16 22:58:21.327: INFO: Failed to get logs of pod csi-azuredisk-node-win-vrwwk, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-vrwwk)

    Mar 16 22:58:21.327: INFO: Logs of kube-system/csi-azuredisk-node-win-vrwwk:azuredisk on node capz-conf-275z6
    Mar 16 22:58:21.327: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-vrwwk:azuredisk
    Mar 16 22:58:21.537: INFO: Logs of kube-system/csi-proxy-dm54w:csi-proxy on node capz-conf-scwjd
    Mar 16 22:58:21.537: INFO:  : STARTLOG
... skipping 12 lines ...
  
    ENDLOG for container kube-system:kube-proxy-windows-bgfqk:kube-proxy
    Mar 16 22:58:22.128: INFO: Logs of kube-system/kube-proxy-windows-x8pwv:kube-proxy on node capz-conf-275z6
    Mar 16 22:58:22.128: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:kube-proxy-windows-x8pwv:kube-proxy
    [FAILED] in [SynchronizedBeforeSuite] - test/e2e/e2e.go:242 @ 03/16/23 22:58:22.129

    << Timeline
  
    [FAILED] Error waiting for all pods to be running and ready: Timed out after 600.001s.

    Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0).
    10 / 18 pods were running and ready.
    Expected 4 pod replicas, 4 are Running and Ready.
    Pods that were neither completed nor running:
        <[]v1.Pod | len:8, cap:8>: 
            - metadata:
... skipping 237 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505
                  lastState:
                    terminated:
                      containerID: containerd://a13b3dcc37d95a1f2869569555c1ea190f9f67ae6660b3e78f036574afeeaabb
                      exitCode: -1073741510
                      finishedAt: "2023-03-16T22:57:15Z"
                      reason: Error

                      startedAt: "2023-03-16T22:57:14Z"
                  name: containerd-logger
                  ready: false
                  restartCount: 10
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-dv27w_kube-system(8b158921-6e6f-4293-aa4d-f1ba3f8d6022)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.4
                phase: Running
                podIP: 10.1.0.4
                podIPs:
                - ip: 10.1.0.4
... skipping 240 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505
                  lastState:
                    terminated:
                      containerID: containerd://5466d33baabec2eac4ebb86646586f775059d171a0498b0b8ab965a8c10f0639
                      exitCode: -1073741510
                      finishedAt: "2023-03-16T22:56:57Z"
                      reason: Error

                      startedAt: "2023-03-16T22:56:57Z"
                  name: containerd-logger
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-lsh6r_kube-system(017a5a4a-d9d2-4bc3-8671-6ed7c34dd141)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.5
                phase: Running
                podIP: 10.1.0.5
                podIPs:
                - ip: 10.1.0.5
... skipping 1237 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba
                  lastState:
                    terminated:
                      containerID: containerd://5eca80256499b374f486996578e81b3aff277eb497e5452059f2b0a6b584e98f
                      exitCode: -1073741510
                      finishedAt: "2023-03-16T22:54:01Z"
                      reason: Error

                      startedAt: "2023-03-16T22:54:00Z"
                  name: csi-proxy
                  ready: false
                  restartCount: 7
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-dm54w_kube-system(1dafe25d-5961-4f8a-8685-e52c2150ab68)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.5
                phase: Running
                podIP: 10.1.0.5
                podIPs:
                - ip: 10.1.0.5
... skipping 211 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba
                  lastState:
                    terminated:
                      containerID: containerd://b743aa1990fa5cacdeef2891f352986125e55081e9e1f09b4f53855b942578d2
                      exitCode: -1073741510
                      finishedAt: "2023-03-16T22:57:02Z"
                      reason: Error

                      startedAt: "2023-03-16T22:57:02Z"
                  name: csi-proxy
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-fwgj7_kube-system(ec53bf42-2782-4e41-954c-24c0694b8136)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.4
                phase: Running
                podIP: 10.1.0.4
                podIPs:
                - ip: 10.1.0.4
... skipping 279 lines ...
                  imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed
                  lastState:
                    terminated:
                      containerID: containerd://be073a99f597d9da07bf80b7b793854fe444f5f6230fd708f5d008ae2e736908
                      exitCode: -1073741510
                      finishedAt: "2023-03-16T22:55:58Z"
                      reason: Error

                      startedAt: "2023-03-16T22:55:58Z"
                  name: kube-proxy
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-bgfqk_kube-system(1b0f5228-df77-4180-b53a-20f0f3d5acb4)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.5
                phase: Running
                podIP: 10.1.0.5
                podIPs:
                - ip: 10.1.0.5
... skipping 279 lines ...
                  imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed
                  lastState:
                    terminated:
                      containerID: containerd://195e6a8c7720308f7313bf0022da068f10c0d49a9d7d1a6411692b1d316f2c8d
                      exitCode: -1073741510
                      finishedAt: "2023-03-16T22:55:52Z"
                      reason: Error

                      startedAt: "2023-03-16T22:55:52Z"
                  name: kube-proxy
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-x8pwv_kube-system(434d370f-88b5-4ede-acf0-2fe2029b30d0)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.4
                phase: Running
                podIP: 10.1.0.4
                podIPs:
                - ip: 10.1.0.4
                qosClass: BestEffort
                startTime: "2023-03-16T22:45:08Z"
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:242 @ 03/16/23 22:58:22.129
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [728.265 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1

      The first SynchronizedBeforeSuite function running on Ginkgo parallel process
      #1 failed.  This suite will now abort.

  
    
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.165
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [728.292 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1

      The first SynchronizedBeforeSuite function running on Ginkgo parallel process
      #1 failed.  This suite will now abort.

  
    
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.167
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [728.296 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1

      The first SynchronizedBeforeSuite function running on Ginkgo parallel process
      #1 failed.  This suite will now abort.

  
    
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/16/23 22:58:22.167
  ------------------------------
  
  Summarizing 4 Failures:
    [FAIL] [SynchronizedBeforeSuite] 

    test/e2e/e2e.go:77
    [FAIL] [SynchronizedBeforeSuite] 

    test/e2e/e2e.go:77
    [FAIL] [SynchronizedBeforeSuite] 

    test/e2e/e2e.go:77
    [FAIL] [SynchronizedBeforeSuite] 

    test/e2e/e2e.go:242
  
  Ran 0 of 7207 Specs in 728.460 seconds
  FAIL! -- A BeforeSuite node failed so all tests were skipped.

  
    I0316 22:46:13.399650      14 e2e.go:117] Starting e2e run "4af2e184-7c1d-4a05-ae29-fb6d39ca4fea" on Ginkgo node 1
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (728.90s)

  FAIL

  
    I0316 22:46:13.397133      16 e2e.go:117] Starting e2e run "f4cfa78f-3b54-4e19-9629-095930e680bb" on Ginkgo node 2
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (728.79s)

  FAIL

  
    I0316 22:46:13.402781      17 e2e.go:117] Starting e2e run "e39c4442-71a1-4ace-8627-54aecbc25947" on Ginkgo node 3
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (728.79s)

  FAIL

  
    I0316 22:46:13.401674      19 e2e.go:117] Starting e2e run "c0b35fcf-0d00-4215-aaaf-3c73b83e8307" on Ginkgo node 4
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (728.78s)

  FAIL

  
  
  Ginkgo ran 1 suite in 12m9.051658491s
  
  Test Suite Failed

  You're using deprecated Ginkgo functionality:
  =============================================
    --slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048
  Mar 16 22:58:23.049: INFO: FAILED!
  Mar 16 22:58:23.050: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
  Mar 16 22:58:23.050: INFO: Dumping all the Cluster API resources in the "capz-conf-0bueug" namespace
  STEP: Dumping logs from the "capz-conf-0bueug" workload cluster @ 03/16/23 22:58:23.785
  Mar 16 22:58:23.785: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug logs
  Mar 16 22:58:23.866: INFO: Collecting logs for Linux node capz-conf-0bueug-control-plane-mj5bc in cluster capz-conf-0bueug in namespace capz-conf-0bueug

  Mar 16 22:58:38.112: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-control-plane-mj5bc

  Mar 16 22:58:39.087: INFO: Collecting logs for Windows node capz-conf-scwjd in cluster capz-conf-0bueug in namespace capz-conf-0bueug

  Mar 16 23:01:06.966: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-scwjd to /logs/artifacts/clusters/capz-conf-0bueug/machines/capz-conf-0bueug-md-win-786c6dcc6f-d9khz/crashdumps.tar
  Mar 16 23:01:08.508: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-md-win-scwjd

Failed to get logs for Machine capz-conf-0bueug-md-win-786c6dcc6f-d9khz, Cluster capz-conf-0bueug/capz-conf-0bueug: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Mar 16 23:01:09.335: INFO: Collecting logs for Windows node capz-conf-275z6 in cluster capz-conf-0bueug in namespace capz-conf-0bueug

  Mar 16 23:03:39.025: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-275z6 to /logs/artifacts/clusters/capz-conf-0bueug/machines/capz-conf-0bueug-md-win-786c6dcc6f-j5vpk/crashdumps.tar
  Mar 16 23:03:40.640: INFO: Collecting boot logs for AzureMachine capz-conf-0bueug-md-win-275z6

Failed to get logs for Machine capz-conf-0bueug-md-win-786c6dcc6f-j5vpk, Cluster capz-conf-0bueug/capz-conf-0bueug: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Mar 16 23:03:41.422: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug nodes
  Mar 16 23:03:41.731: INFO: Describing Node capz-conf-0bueug-control-plane-mj5bc
  Mar 16 23:03:41.930: INFO: Describing Node capz-conf-275z6
  Mar 16 23:03:42.120: INFO: Describing Node capz-conf-scwjd
  Mar 16 23:03:42.303: INFO: Fetching nodes took 880.352112ms
  Mar 16 23:03:42.303: INFO: Dumping workload cluster capz-conf-0bueug/capz-conf-0bueug pod logs
... skipping 5 lines ...
  Mar 16 23:03:42.700: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-59d9cb8fbb-5jzmf, container calico-kube-controllers
  Mar 16 23:03:42.774: INFO: Describing Pod calico-system/calico-node-h559n
  Mar 16 23:03:42.775: INFO: Creating log watcher for controller calico-system/calico-node-h559n, container calico-node
  Mar 16 23:03:42.870: INFO: Describing Pod calico-system/calico-node-windows-64sf9
  Mar 16 23:03:42.870: INFO: Creating log watcher for controller calico-system/calico-node-windows-64sf9, container calico-node-startup
  Mar 16 23:03:42.870: INFO: Creating log watcher for controller calico-system/calico-node-windows-64sf9, container calico-node-felix
  Mar 16 23:03:42.923: INFO: Error starting logs stream for pod calico-system/calico-node-windows-64sf9, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-64sf9" is waiting to start: PodInitializing
  Mar 16 23:03:42.924: INFO: Error starting logs stream for pod calico-system/calico-node-windows-64sf9, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-64sf9" is waiting to start: PodInitializing
  Mar 16 23:03:42.936: INFO: Describing Pod calico-system/calico-node-windows-ptp8l
  Mar 16 23:03:42.936: INFO: Creating log watcher for controller calico-system/calico-node-windows-ptp8l, container calico-node-startup
  Mar 16 23:03:42.936: INFO: Creating log watcher for controller calico-system/calico-node-windows-ptp8l, container calico-node-felix
  Mar 16 23:03:42.985: INFO: Error starting logs stream for pod calico-system/calico-node-windows-ptp8l, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-ptp8l" is waiting to start: PodInitializing
  Mar 16 23:03:42.985: INFO: Error starting logs stream for pod calico-system/calico-node-windows-ptp8l, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-ptp8l" is waiting to start: PodInitializing
  Mar 16 23:03:43.331: INFO: Describing Pod calico-system/calico-typha-7998d677cf-226xr
  Mar 16 23:03:43.331: INFO: Creating log watcher for controller calico-system/calico-typha-7998d677cf-226xr, container calico-typha
  Mar 16 23:03:43.731: INFO: Describing Pod calico-system/csi-node-driver-svgcw
  Mar 16 23:03:43.731: INFO: Creating log watcher for controller calico-system/csi-node-driver-svgcw, container calico-csi
  Mar 16 23:03:43.732: INFO: Creating log watcher for controller calico-system/csi-node-driver-svgcw, container csi-node-driver-registrar
  Mar 16 23:03:44.133: INFO: Describing Pod kube-system/containerd-logger-dv27w
... skipping 16 lines ...
  Mar 16 23:03:46.136: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container node-driver-registrar
  Mar 16 23:03:46.136: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-v7lzh, container azuredisk
  Mar 16 23:03:46.544: INFO: Describing Pod kube-system/csi-azuredisk-node-win-tf9rw
  Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container node-driver-registrar
  Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container liveness-probe
  Mar 16 23:03:46.544: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-tf9rw, container azuredisk
  Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing
  Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing
  Mar 16 23:03:46.590: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-tf9rw, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-tf9rw" is waiting to start: PodInitializing
  Mar 16 23:03:46.933: INFO: Describing Pod kube-system/csi-azuredisk-node-win-vrwwk
  Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container liveness-probe
  Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container azuredisk
  Mar 16 23:03:46.933: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrwwk, container node-driver-registrar
  Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing
  Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing
  Mar 16 23:03:46.972: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrwwk, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-vrwwk" is waiting to start: PodInitializing
  Mar 16 23:03:47.339: INFO: Describing Pod kube-system/csi-proxy-dm54w
  Mar 16 23:03:47.339: INFO: Creating log watcher for controller kube-system/csi-proxy-dm54w, container csi-proxy
  Mar 16 23:03:47.734: INFO: Describing Pod kube-system/csi-proxy-fwgj7
  Mar 16 23:03:47.734: INFO: Creating log watcher for controller kube-system/csi-proxy-fwgj7, container csi-proxy
  Mar 16 23:03:48.133: INFO: Describing Pod kube-system/etcd-capz-conf-0bueug-control-plane-mj5bc
  Mar 16 23:03:48.133: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-0bueug-control-plane-mj5bc, container etcd
... skipping 21 lines ...
  INFO: Waiting for the Cluster capz-conf-0bueug/capz-conf-0bueug to be deleted
  STEP: Waiting for cluster capz-conf-0bueug to be deleted @ 03/16/23 23:03:54.313
  Mar 16 23:09:44.531: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
  INFO: Deleting namespace capz-conf-0bueug
  Mar 16 23:09:44.580: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
  STEP: Redacting sensitive information from logs @ 03/16/23 23:09:45.042
• [FAILED] [1896.688 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc002e28f60>: {
          error: <*errors.withMessage | 0xc002b12900>{
              cause: <*errors.errorString | 0xc0004fa310>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/16/23 22:58:23.048

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 +0x175a
... skipping 6 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.012 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227

Ran 1 of 25 Specs in 2035.613 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 24 Skipped
--- FAIL: TestE2E (2035.63s)
You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:297
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:300

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.8.4

FAIL

Ginkgo ran 1 suite in 36m11.333113255s

Test Suite Failed
make[3]: *** [Makefile:663: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:678: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:694: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:704: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 8 lines ...