PR claudiubelu: Refactored kubelet's kuberuntime_sandbox
Result FAILURE
Tests 1 failed / 2 succeeded
Started 2023-03-20 20:01
Elapsed 1h5m
Revision 5e605d81d57e2309b3c08f821c9dc41372f802c7
Refs 114185

Test Failures


capz-e2e [It] Conformance Tests conformance-tests 33m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
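The focus pattern is just the failed spec's full name with regex metacharacters escaped. With a local checkout, the same spec can be targeted directly via Ginkgo's focus flag; a minimal sketch, assuming the ginkgo CLI is installed and the suite lives under test/e2e as in the paths below (environment setup such as Azure credentials is omitted):

  # Illustrative local repro, not the job's exact invocation.
  cd $GOPATH/src/sigs.k8s.io/cluster-api-provider-azure
  ginkgo -v --focus 'Conformance Tests conformance-tests' ./test/e2e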
[FAILED] Unexpected error:
    <*errors.withStack | 0xc000f9b470>: {
        error: <*errors.withMessage | 0xc002656300>{
            cause: <*errors.errorString | 0xc00021f130>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532

				
stdout/stderr from junit.e2e_suite.1.xml



2 Passed Tests

24 Skipped Tests

Error lines from build-log.txt

... skipping 139 lines ...
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   138  100   138    0     0   4312      0 --:--:-- --:--:-- --:--:--  4312

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    32  100    32    0     0    123      0 --:--:-- --:--:-- --:--:--  2000
using CI_VERSION=v1.27.0-beta.0.25+15894cfc85cab6
using KUBERNETES_VERSION=v1.27.0-beta.0.25+15894cfc85cab6
using IMAGE_TAG=v1.27.0-beta.0.29_117662b4a973d5
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5 not found: manifest unknown: manifest tagged by "v1.27.0-beta.0.29_117662b4a973d5" is not found
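The daemon error above means the CI registry has no prebuilt control-plane image for this IMAGE_TAG, so the job falls back to building Kubernetes from source, as the following lines show. A sketch of that check-then-build pattern, assuming the docker CLI; the actual CI script and make target may differ:

  # Probe the registry for the tag; on a missing manifest, build release images locally.
  docker manifest inspect capzci.azurecr.io/kube-apiserver:${IMAGE_TAG} \
    || make -C ${GOPATH}/src/k8s.io/kubernetes quick-release-images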
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0320 20:02:45] Verifying Prerequisites....
+++ [0320 20:02:45] Building Docker image kube-build:build-a0d1e9fdaf-5-v1.27.0-go1.20.2-bullseye.0
+++ [0320 20:04:52] Creating data container kube-build-data-a0d1e9fdaf-5-v1.27.0-go1.20.2-bullseye.0
+++ [0320 20:04:54] Syncing sources to container
... skipping 812 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98
  INFO: Cluster name is capz-conf-1plfqp
  STEP: Creating namespace "capz-conf-1plfqp" for hosting the cluster @ 03/20/23 20:33:22.243
  Mar 20 20:33:22.243: INFO: starting to create namespace for hosting the "capz-conf-1plfqp" test spec
2023/03/20 20:33:22 failed trying to get namespace (capz-conf-1plfqp):namespaces "capz-conf-1plfqp" not found
  INFO: Creating namespace capz-conf-1plfqp
  INFO: Creating event watcher for namespace "capz-conf-1plfqp"
  conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 03/20/23 20:33:22.333
    conformance-tests
    Name | N | Min | Median | Mean | StdDev | Max
  INFO: Creating the workload cluster with name "capz-conf-1plfqp" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-beta.0.25+15894cfc85cab6, 1 control-plane machines, 0 worker machines)
... skipping 99 lines ...
  ====================================================
  Random Seed: 1679344953 - will randomize all specs
  
  Will run 348 of 7207 specs
  Running in parallel across 4 processes
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [758.318 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    Timeline >>
    Mar 20 20:42:33.697: INFO: >>> kubeConfig: /tmp/kubeconfig
    Mar 20 20:42:33.699: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
... skipping 63 lines ...
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:37:59 +0000 UTC - event for kube-proxy-x9kfz: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-x9kfz to capz-conf-1plfqp-control-plane-2j2gm
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5"
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-controller-manager:v1.27.0-beta.0.29_117662b4a973d5"
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy: {daemonset-controller } SuccessfulCreate: Created pod: kube-proxy-7gqj4
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-7gqj4: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-proxy-7gqj4 to capz-conf-1plfqp-control-plane-2j2gm
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5"
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-x9kfz: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedMount: MountVolume.SetUp failed for volume "kube-proxy" : object "kube-system"/"kube-proxy" not registered

    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-proxy-x9kfz: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-m8dpv" : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:capz-conf-1plfqp-control-plane-2j2gm" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'capz-conf-1plfqp-control-plane-2j2gm' and this object, object "kube-system"/"kube-root-ca.crt" not registered]

    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:01 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5"
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-apiserver
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-controller-manager-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-controller-manager
    Mar 20 20:55:04.015: INFO: At 2023-03-20 20:38:02 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Killing: Stopping container kube-scheduler
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:04 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-apiserver:v1.27.0-beta.0.29_117662b4a973d5" in 3.562353503s (3.562464604s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:05 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-apiserver
... skipping 4 lines ...
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-scheduler
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-scheduler:v1.27.0-beta.0.29_117662b4a973d5" in 2.140395691s (8.671982443s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:10 +0000 UTC - event for kube-scheduler-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-scheduler
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container kube-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "capzci.azurecr.io/kube-proxy:v1.27.0-beta.0.29_117662b4a973d5" in 3.499445571s (11.923340955s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:13 +0000 UTC - event for kube-proxy-7gqj4: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container kube-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:21 +0000 UTC - event for kube-apiserver-capz-conf-1plfqp-control-plane-2j2gm: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Unhealthy: Startup probe failed: HTTP probe failed with statuscode: 500

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:24 +0000 UTC - event for kube-controller-manager: {kube-controller-manager } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_11cc3f7d-b40e-4cbe-be22-ee508e31eb2b became leader
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:26 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: capz-conf-1plfqp-control-plane-2j2gm_d0286c4b-aa0a-48d9-b282-91d3450fb492 became leader
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:31 +0000 UTC - event for metrics-server: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-6987569d96 to 1
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:38:31 +0000 UTC - event for metrics-server-6987569d96: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-6987569d96-kbkwt
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9e13897e7fed205b2819620b91a752b5b98b00008e7f1e2aad8184773be3dc43": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "32f577f0bea9664ec11ac0e5b98a62af85a154812095aa16ee7f9349556e49a7": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:20 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "da59f6a3840523d29dc136abb059229721874304ef229111992d9d331dfd85cf": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:21 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container coredns
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for coredns-5d78c9869d-wh4l9: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container coredns
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:33 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulling: Pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2"
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.1" already present on machine
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container coredns
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:38 +0000 UTC - event for coredns-5d78c9869d-c58vk: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container coredns
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:39 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Pulled: Successfully pulled image "k8s.gcr.io/metrics-server/metrics-server:v0.6.2" in 5.443256687s (6.220329455s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:40 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Created: Created container metrics-server
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:39:41 +0000 UTC - event for metrics-server-6987569d96-kbkwt: {kubelet capz-conf-1plfqp-control-plane-2j2gm} Started: Started container metrics-server
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-controller: {deployment-controller } ScalingReplicaSet: Scaled up replica set csi-azuredisk-controller-56db99df6c to 1
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:40:11 +0000 UTC - event for csi-azuredisk-controller-56db99df6c: {replicaset-controller } SuccessfulCreate: Created pod: csi-azuredisk-controller-56db99df6c-sbnn7
... skipping 53 lines ...
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:23 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} Killing: Stopping container kube-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:27 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Created: Created container containerd-logger
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:27 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Started: Started container containerd-logger
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:28 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Killing: Stopping container containerd-logger
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:31 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 4.2654753s (9.082484s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:32 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 415.267ms (415.267ms including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:32 +0000 UTC - event for kube-proxy-windows-wmp2s: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-wmp2s_kube-system(bcd38796-26a8-4f15-9513-2a8ac58d2ba4)

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:43 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Created: Created container containerd-logger
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:43 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Started: Started container containerd-logger
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:44 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Killing: Stopping container containerd-logger
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:44 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 470.5145ms (470.5145ms including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:45 +0000 UTC - event for kube-proxy-windows-527hb: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-windows-527hb_kube-system(00140840-3274-4053-b4b9-49e8d5996de7)

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:49 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 595.9949ms (595.9949ms including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:41:55 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 607.4595ms (607.4595ms including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:03 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 525.7424ms (525.7424ms including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:06 +0000 UTC - event for containerd-logger-xxz7w: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-xxz7w_kube-system(e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c)

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:07 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-nrh82
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:07 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-bnsgh
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:08 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0"
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:08 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2"
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:18 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0" in 498.4167ms (498.4167ms including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 17.1572849s (17.1572849s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Created: Created container init
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:26 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Started: Started container init
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:27 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Killing: Stopping container init
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:27 +0000 UTC - event for csi-azuredisk-node-win-nrh82: {kubelet capz-conf-vvvcd} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:31 +0000 UTC - event for containerd-logger-ng4wl: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container containerd-logger in pod containerd-logger-ng4wl_kube-system(bd28dbc9-32d2-41df-8201-42b78981a1f5)

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Created: Created container csi-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 17.127592s (34.2429305s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:43 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Started: Started container csi-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:44 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Killing: Stopping container csi-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:42:48 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:43:05 +0000 UTC - event for csi-proxy-bnsgh: {kubelet capz-conf-vvvcd} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-bnsgh_kube-system(d8246000-ea4b-4f56-a4b8-755b44656004)

    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:46 +0000 UTC - event for csi-azuredisk-node-win: {daemonset-controller } SuccessfulCreate: Created pod: csi-azuredisk-node-win-778bd
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:46 +0000 UTC - event for csi-proxy: {daemonset-controller } SuccessfulCreate: Created pod: csi-proxy-4v7zg
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:47 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulling: Pulling image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0"
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:44:47 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulling: Pulling image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2"
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:09 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" in 21.9331601s (21.933656s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:09 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Created: Created container init
... skipping 2 lines ...
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:15 +0000 UTC - event for csi-azuredisk-node-win-778bd: {kubelet capz-conf-gm7xg} Pulled: Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0" already present on machine
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:31 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulled: Successfully pulled image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" in 22.3031678s (44.1908144s including waiting)
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:32 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Created: Created container csi-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:32 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Started: Started container csi-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:33 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Killing: Stopping container csi-proxy
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:37 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} Pulled: Container image "ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2" already present on machine
    Mar 20 20:55:04.016: INFO: At 2023-03-20 20:45:54 +0000 UTC - event for csi-proxy-4v7zg: {kubelet capz-conf-gm7xg} BackOff: Back-off restarting failed container csi-proxy in pod csi-proxy-4v7zg_kube-system(4bfb48ce-a08e-4c4b-8d11-594ea6912696)

    Mar 20 20:55:04.070: INFO: POD                                                           NODE                                  PHASE    GRACE  CONDITIONS
    Mar 20 20:55:04.070: INFO: containerd-logger-ng4wl                                       capz-conf-gm7xg                       Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:54:10 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:54:10 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:41:01 +0000 UTC  }]
    Mar 20 20:55:04.070: INFO: containerd-logger-xxz7w                                       capz-conf-vvvcd                       Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:45 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:52:45 +0000 UTC ContainersNotReady containers with unready status: [containerd-logger]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:54 +0000 UTC  }]
    Mar 20 20:55:04.070: INFO: coredns-5d78c9869d-c58vk                                      capz-conf-1plfqp-control-plane-2j2gm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC  }]
    Mar 20 20:55:04.070: INFO: coredns-5d78c9869d-wh4l9                                      capz-conf-1plfqp-control-plane-2j2gm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:39:19 +0000 UTC  }]
    Mar 20 20:55:04.070: INFO: csi-azuredisk-controller-56db99df6c-sbnn7                     capz-conf-1plfqp-control-plane-2j2gm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-20 20:40:11 +0000 UTC  }]
... skipping 137 lines ...
          ]
        }
      ],
      "filters": [
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error"

        }
      ],
      "outputs": [
        {
          "type": "StdOutput"
        }
... skipping 28 lines ...
          ]
        }
      ],
      "filters": [
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::LayerID && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == hcsshim::NameToGuid && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.Stats && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == containerd.task.v2.Task.State && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetProcessProperties && hasnoproperty error"

        },
        {
            "type": "drop",
            "include": "ProviderName == Microsoft.Virtualization.RunHCS && name == HcsGetComputeSystemProperties && hasnoproperty error"

        }
      ],
      "outputs": [
        {
          "type": "StdOutput"
        }
      ],
      "schemaVersion": "2016-08-11"
    }
  
    Logging started...
  
    ENDLOG for container kube-system:containerd-logger-xxz7w:containerd-logger
    Mar 20 20:55:09.198: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd)

    Mar 20 20:55:09.198: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:liveness-probe on node capz-conf-gm7xg
    Mar 20 20:55:09.198: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:liveness-probe
    Mar 20 20:55:09.597: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd)

    Mar 20 20:55:09.597: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:node-driver-registrar on node capz-conf-gm7xg
    Mar 20 20:55:09.597: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:node-driver-registrar
    Mar 20 20:55:09.998: INFO: Failed to get logs of pod csi-azuredisk-node-win-778bd, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-778bd)

    Mar 20 20:55:09.998: INFO: Logs of kube-system/csi-azuredisk-node-win-778bd:azuredisk on node capz-conf-gm7xg
    Mar 20 20:55:09.998: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-778bd:azuredisk
    Mar 20 20:55:10.397: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container liveness-probe, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82)

    Mar 20 20:55:10.397: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:liveness-probe on node capz-conf-vvvcd
    Mar 20 20:55:10.397: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:liveness-probe
    Mar 20 20:55:10.798: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container node-driver-registrar, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82)

    Mar 20 20:55:10.798: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:node-driver-registrar on node capz-conf-vvvcd
    Mar 20 20:55:10.798: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:node-driver-registrar
    Mar 20 20:55:11.197: INFO: Failed to get logs of pod csi-azuredisk-node-win-nrh82, container azuredisk, err: the server rejected our request for an unknown reason (get pods csi-azuredisk-node-win-nrh82)

    Mar 20 20:55:11.197: INFO: Logs of kube-system/csi-azuredisk-node-win-nrh82:azuredisk on node capz-conf-vvvcd
    Mar 20 20:55:11.197: INFO:  : STARTLOG
  
    ENDLOG for container kube-system:csi-azuredisk-node-win-nrh82:azuredisk
    Mar 20 20:55:11.413: INFO: Logs of kube-system/csi-proxy-4v7zg:csi-proxy on node capz-conf-gm7xg
    Mar 20 20:55:11.413: INFO:  : STARTLOG
... skipping 17 lines ...
    discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose 
    parameter. For a list of approved verbs, type Get-Verb.
    Running kub-proxy service.
    Waiting for HNS network Calico to be created...
  
    ENDLOG for container kube-system:kube-proxy-windows-wmp2s:kube-proxy
    [FAILED] in [SynchronizedBeforeSuite] - test/e2e/e2e.go:242 @ 03/20/23 20:55:12.014

    << Timeline
  
    [FAILED] Error waiting for all pods to be running and ready: Timed out after 600.000s.

    Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0).
    10 / 18 pods were running and ready.
    Expected 4 pod replicas, 4 are Running and Ready.
    Pods that were neither completed nor running:
        <[]v1.Pod | len:8, cap:8>: 
            - metadata:
... skipping 237 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505
                  lastState:
                    terminated:
                      containerID: containerd://1a2e8e1c5f111dd334de452542dd1eb113ab1ba967088428f64f5c344c449e41
                      exitCode: -1073741510
                      finishedAt: "2023-03-20T20:54:06Z"
                      reason: Error

                      startedAt: "2023-03-20T20:54:05Z"
                  name: containerd-logger
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-ng4wl_kube-system(bd28dbc9-32d2-41df-8201-42b78981a1f5)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.4
                phase: Running
                podIP: 10.1.0.4
                podIPs:
                - ip: 10.1.0.4
... skipping 240 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505
                  lastState:
                    terminated:
                      containerID: containerd://8d389ab98b1e1381585a689130d3632e29d79314045e6027a7505ced598f75ac
                      exitCode: -1073741510
                      finishedAt: "2023-03-20T20:52:41Z"
                      reason: Error

                      startedAt: "2023-03-20T20:52:40Z"
                  name: containerd-logger
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=containerd-logger pod=containerd-logger-xxz7w_kube-system(e7e2ec93-e3fc-4ecc-8c7e-5cdb59f5fa8c)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.5
                phase: Running
                podIP: 10.1.0.5
                podIPs:
                - ip: 10.1.0.5
... skipping 1241 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba
                  lastState:
                    terminated:
                      containerID: containerd://54e337036669b590080a33498de1e29bfe77c5ddc7a9d11a0edabf34a8b47d31
                      exitCode: -1073741510
                      finishedAt: "2023-03-20T20:50:59Z"
                      reason: Error

                      startedAt: "2023-03-20T20:50:58Z"
                  name: csi-proxy
                  ready: false
                  restartCount: 7
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-4v7zg_kube-system(4bfb48ce-a08e-4c4b-8d11-594ea6912696)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.4
                phase: Running
                podIP: 10.1.0.4
                podIPs:
                - ip: 10.1.0.4
... skipping 211 lines ...
                  imageID: ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba
                  lastState:
                    terminated:
                      containerID: containerd://2353a4851cba8f20bd2fc79edf323753812abc396be6ff59943c0f1f8776e173
                      exitCode: -1073741510
                      finishedAt: "2023-03-20T20:53:14Z"
                      reason: Error

                      startedAt: "2023-03-20T20:53:13Z"
                  name: csi-proxy
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=csi-proxy pod=csi-proxy-bnsgh_kube-system(d8246000-ea4b-4f56-a4b8-755b44656004)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.5
                phase: Running
                podIP: 10.1.0.5
                podIPs:
                - ip: 10.1.0.5
... skipping 279 lines ...
                  imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed
                  lastState:
                    terminated:
                      containerID: containerd://39f24b2731756787768c3581c711a27fc3bc56470b3c79d1d4bf2d3bae83468b
                      exitCode: -1073741510
                      finishedAt: "2023-03-20T20:51:56Z"
                      reason: Error

                      startedAt: "2023-03-20T20:51:56Z"
                  name: kube-proxy
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-527hb_kube-system(00140840-3274-4053-b4b9-49e8d5996de7)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.4
                phase: Running
                podIP: 10.1.0.4
                podIPs:
                - ip: 10.1.0.4
... skipping 279 lines ...
                  imageID: sha256:066f734ecf45f03f1a29b2c4432153044af372540aec60a4e46e4a8b627cf1ed
                  lastState:
                    terminated:
                      containerID: containerd://2e0305ad0952156deb179bd5d9b7d8b1583328d2294ce9dddab65ae4da035397
                      exitCode: -1073741510
                      finishedAt: "2023-03-20T20:51:53Z"
                      reason: Error

                      startedAt: "2023-03-20T20:51:52Z"
                  name: kube-proxy
                  ready: false
                  restartCount: 9
                  started: false
                  state:
                    waiting:
                      message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-windows-wmp2s_kube-system(bcd38796-26a8-4f15-9513-2a8ac58d2ba4)

                      reason: CrashLoopBackOff
                hostIP: 10.1.0.5
                phase: Running
                podIP: 10.1.0.5
                podIPs:
                - ip: 10.1.0.5
                qosClass: BestEffort
                startTime: "2023-03-20T20:40:55Z"
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:242 @ 03/20/23 20:55:12.014
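    Note: the exitCode -1073741510 recurring in the pod statuses above is the signed 32-bit form of the Windows NTSTATUS 0xC000013A (STATUS_CONTROL_C_EXIT), which typically means the processes were terminated by a console control event rather than faulting on their own. A quick way to decode such codes, assuming bash arithmetic:

      printf '0x%x\n' $(( -1073741510 & 0xFFFFFFFF ))   # prints 0xc000013a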
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [758.311 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1

      The first SynchronizedBeforeSuite function running on Ginkgo parallel process
      #1 failed.  This suite will now abort.

  
    
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [758.335 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1

      The first SynchronizedBeforeSuite function running on Ginkgo parallel process
      #1 failed.  This suite will now abort.

  
    
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
  ------------------------------
  [SynchronizedBeforeSuite] [FAILED] [758.331 seconds]

  [SynchronizedBeforeSuite] 
  test/e2e/e2e.go:77
  
    [FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1

      The first SynchronizedBeforeSuite function running on Ginkgo parallel process
      #1 failed.  This suite will now abort.

  
    
    In [SynchronizedBeforeSuite] at: test/e2e/e2e.go:77 @ 03/20/23 20:55:12.036
  ------------------------------
  
  Summarizing 4 Failures:
    [FAIL] [SynchronizedBeforeSuite]
    test/e2e/e2e.go:77
    [FAIL] [SynchronizedBeforeSuite]
    test/e2e/e2e.go:77
    [FAIL] [SynchronizedBeforeSuite]
    test/e2e/e2e.go:77
    [FAIL] [SynchronizedBeforeSuite]
    test/e2e/e2e.go:242
  
  Ran 0 of 7207 Specs in 758.428 seconds
  FAIL! -- A BeforeSuite node failed so all tests were skipped.

  
    I0320 20:42:33.233579      15 e2e.go:117] Starting e2e run "6de80a66-9fe0-470f-a3de-d8b524a156e7" on Ginkgo node 1
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
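  The flag renames the notice describes are mechanical; a sketch of the migrated flags as they would be passed to a compiled Ginkgo v2 test binary (the binary name and values here are illustrative, not from this job):

    ./e2e.test --ginkgo.flake-attempts=3 --ginkgo.show-node-events --ginkgo.poll-progress-after=120s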
  
  --- FAIL: TestE2E (758.86s)

  FAIL

  
    I0320 20:42:33.231585      16 e2e.go:117] Starting e2e run "1c4d63ab-4761-4bda-93a7-5f81513b835c" on Ginkgo node 2
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (758.81s)

  FAIL

  
    I0320 20:42:33.243068      17 e2e.go:117] Starting e2e run "7672dc9e-5156-48cf-a02d-283c07070e7d" on Ginkgo node 3
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (758.80s)

  FAIL

  
    I0320 20:42:33.229510      19 e2e.go:117] Starting e2e run "def2f57d-e7a3-42b4-89c4-1100437748c0" on Ginkgo node 4
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  --- FAIL: TestE2E (758.81s)

  FAIL

  
  
  Ginkgo ran 1 suite in 12m38.988656037s
  
  Test Suite Failed

  You're using deprecated Ginkgo functionality:
  =============================================
    --slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.9.1
  
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532
  Mar 20 20:55:12.532: INFO: FAILED!
  Mar 20 20:55:12.533: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
  Mar 20 20:55:12.533: INFO: Dumping all the Cluster API resources in the "capz-conf-1plfqp" namespace
  STEP: Dumping logs from the "capz-conf-1plfqp" workload cluster @ 03/20/23 20:55:12.877
  Mar 20 20:55:12.877: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp logs
  Mar 20 20:55:12.914: INFO: Collecting logs for Linux node capz-conf-1plfqp-control-plane-2j2gm in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp

  Mar 20 20:55:25.757: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-control-plane-2j2gm

  Mar 20 20:55:26.746: INFO: Collecting logs for Windows node capz-conf-gm7xg in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp

  Mar 20 20:58:06.583: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-gm7xg to /logs/artifacts/clusters/capz-conf-1plfqp/machines/capz-conf-1plfqp-md-win-65dbf97bf6-csgg7/crashdumps.tar
  Mar 20 20:58:08.384: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-md-win-gm7xg

Failed to get logs for Machine capz-conf-1plfqp-md-win-65dbf97bf6-csgg7, Cluster capz-conf-1plfqp/capz-conf-1plfqp: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Mar 20 20:58:09.375: INFO: Collecting logs for Windows node capz-conf-vvvcd in cluster capz-conf-1plfqp in namespace capz-conf-1plfqp

  Mar 20 21:00:38.829: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-vvvcd to /logs/artifacts/clusters/capz-conf-1plfqp/machines/capz-conf-1plfqp-md-win-65dbf97bf6-j9qvz/crashdumps.tar
  Mar 20 21:00:40.689: INFO: Collecting boot logs for AzureMachine capz-conf-1plfqp-md-win-vvvcd

Failed to get logs for Machine capz-conf-1plfqp-md-win-65dbf97bf6-j9qvz, Cluster capz-conf-1plfqp/capz-conf-1plfqp: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Mar 20 21:00:41.549: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp nodes
  Mar 20 21:00:41.850: INFO: Describing Node capz-conf-1plfqp-control-plane-2j2gm
  Mar 20 21:00:42.067: INFO: Describing Node capz-conf-gm7xg
  Mar 20 21:00:42.265: INFO: Describing Node capz-conf-vvvcd
  Mar 20 21:00:42.461: INFO: Fetching nodes took 912.555835ms
  Mar 20 21:00:42.462: INFO: Dumping workload cluster capz-conf-1plfqp/capz-conf-1plfqp pod logs
... skipping 5 lines ...
  Mar 20 21:00:42.883: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-59d9cb8fbb-8ft2d, container calico-kube-controllers
  Mar 20 21:00:42.959: INFO: Describing Pod calico-system/calico-node-bdvzb
  Mar 20 21:00:42.959: INFO: Creating log watcher for controller calico-system/calico-node-bdvzb, container calico-node
  Mar 20 21:00:43.043: INFO: Describing Pod calico-system/calico-node-windows-9f96h
  Mar 20 21:00:43.043: INFO: Creating log watcher for controller calico-system/calico-node-windows-9f96h, container calico-node-startup
  Mar 20 21:00:43.044: INFO: Creating log watcher for controller calico-system/calico-node-windows-9f96h, container calico-node-felix
  Mar 20 21:00:43.100: INFO: Error starting logs stream for pod calico-system/calico-node-windows-9f96h, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-9f96h" is waiting to start: PodInitializing
  Mar 20 21:00:43.100: INFO: Error starting logs stream for pod calico-system/calico-node-windows-9f96h, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-9f96h" is waiting to start: PodInitializing
  Mar 20 21:00:43.115: INFO: Describing Pod calico-system/calico-node-windows-k9kth
  Mar 20 21:00:43.115: INFO: Creating log watcher for controller calico-system/calico-node-windows-k9kth, container calico-node-startup
  Mar 20 21:00:43.115: INFO: Creating log watcher for controller calico-system/calico-node-windows-k9kth, container calico-node-felix
  Mar 20 21:00:43.169: INFO: Error starting logs stream for pod calico-system/calico-node-windows-k9kth, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-k9kth" is waiting to start: PodInitializing
  Mar 20 21:00:43.170: INFO: Error starting logs stream for pod calico-system/calico-node-windows-k9kth, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-k9kth" is waiting to start: PodInitializing
  Mar 20 21:00:43.507: INFO: Describing Pod calico-system/calico-typha-96fb785dc-c7sr9
  Mar 20 21:00:43.507: INFO: Creating log watcher for controller calico-system/calico-typha-96fb785dc-c7sr9, container calico-typha
  Mar 20 21:00:43.908: INFO: Describing Pod calico-system/csi-node-driver-j9ptp
  Mar 20 21:00:43.908: INFO: Creating log watcher for controller calico-system/csi-node-driver-j9ptp, container csi-node-driver-registrar
  Mar 20 21:00:43.908: INFO: Creating log watcher for controller calico-system/csi-node-driver-j9ptp, container calico-csi
  Mar 20 21:00:44.310: INFO: Describing Pod kube-system/containerd-logger-ng4wl
... skipping 16 lines ...
  Mar 20 21:00:46.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container node-driver-registrar
  Mar 20 21:00:46.309: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jtlzl, container azuredisk
  Mar 20 21:00:46.707: INFO: Describing Pod kube-system/csi-azuredisk-node-win-778bd
  Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container liveness-probe
  Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container azuredisk
  Mar 20 21:00:46.707: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-778bd, container node-driver-registrar
  Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing
  Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing
  Mar 20 21:00:46.755: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-778bd, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-778bd" is waiting to start: PodInitializing
  Mar 20 21:00:47.110: INFO: Describing Pod kube-system/csi-azuredisk-node-win-nrh82
  Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container node-driver-registrar
  Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container liveness-probe
  Mar 20 21:00:47.110: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-nrh82, container azuredisk
  Mar 20 21:00:47.148: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing
  Mar 20 21:00:47.149: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing
  Mar 20 21:00:47.149: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-nrh82, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-nrh82" is waiting to start: PodInitializing
  Mar 20 21:00:47.509: INFO: Describing Pod kube-system/csi-proxy-4v7zg
  Mar 20 21:00:47.510: INFO: Creating log watcher for controller kube-system/csi-proxy-4v7zg, container csi-proxy
  Mar 20 21:00:47.912: INFO: Describing Pod kube-system/csi-proxy-bnsgh
  Mar 20 21:00:47.913: INFO: Creating log watcher for controller kube-system/csi-proxy-bnsgh, container csi-proxy
  Mar 20 21:00:48.308: INFO: Describing Pod kube-system/etcd-capz-conf-1plfqp-control-plane-2j2gm
  Mar 20 21:00:48.309: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-1plfqp-control-plane-2j2gm, container etcd
... skipping 21 lines ...
  INFO: Waiting for the Cluster capz-conf-1plfqp/capz-conf-1plfqp to be deleted
  STEP: Waiting for cluster capz-conf-1plfqp to be deleted @ 03/20/23 21:00:53.851
  Mar 20 21:06:34.026: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
  INFO: Deleting namespace capz-conf-1plfqp
  Mar 20 21:06:34.047: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
  STEP: Redacting sensitive information from logs @ 03/20/23 21:06:34.776
• [FAILED] [2004.781 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:98

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc000f9b470>: {
          error: <*errors.withMessage | 0xc002656300>{
              cause: <*errors.errorString | 0xc00021f130>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x34b656e, 0x376dca7, 0x196a59b, 0x197e6d8, 0x14ec761],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 @ 03/20/23 20:55:12.532

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227 +0x175a
... skipping 6 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.007 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:227

Ran 1 of 25 Specs in 2183.473 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 24 Skipped
--- FAIL: TestE2E (2183.48s)
FAIL
You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:297
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:300

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.8.4


Ginkgo ran 1 suite in 38m22.34137712s

Test Suite Failed
make[3]: *** [Makefile:663: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:678: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:694: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:704: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 8 lines ...