PR Divya063: [E2E] Add support for pulling images from private registry
Result FAILURE
Tests 1 failed / 2 succeeded
Started 2023-01-27 16:41
Elapsed 1h32m
Revision 421d6736f75568ccadfc393a9d7bc7a66a291fae
Refs 114625

Test Failures


capz-e2e [It] Conformance Tests conformance-tests 55m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Unexpected error:
    <*errors.withStack | 0xc0014a1db8>: {
        error: <*errors.withMessage | 0xc000ff27e0>{
            cause: <*errors.errorString | 0xc00043fd30>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x33a3f59, 0x3651f67, 0x194537b, 0x1959958, 0x14d9741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/27/23 17:59:57.586

				
From junit.e2e_suite.1.xml



2 Passed Tests

23 Skipped Tests

Error lines from build-log.txt

... skipping 120 lines ...
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   138  100   138    0     0   2029      0 --:--:-- --:--:-- --:--:--  2059

100    33  100    33    0     0    237      0 --:--:-- --:--:-- --:--:--   237
using CI_VERSION=v1.27.0-alpha.1.56+4d9e8f76959f16
using KUBERNETES_VERSION=v1.27.0-alpha.1.56+4d9e8f76959f16
using IMAGE_TAG=v1.27.0-alpha.1.60_cc3cf560a0b0a7
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.1.60_cc3cf560a0b0a7 not found: manifest unknown: manifest tagged by "v1.27.0-alpha.1.60_cc3cf560a0b0a7" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0127 16:42:15] Verifying Prerequisites....
+++ [0127 16:42:16] Building Docker image kube-build:build-b35bc751e7-5-v1.26.0-go1.19.5-bullseye.0
+++ [0127 16:45:33] Creating data container kube-build-data-b35bc751e7-5-v1.26.0-go1.19.5-bullseye.0
+++ [0127 16:45:55] Syncing sources to container
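Note on the pull failure above: the "manifest unknown" error for capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.1.60_cc3cf560a0b0a7 is what triggers the "Building Kubernetes" fallback. A minimal sketch of checking that case up front, assuming docker is available on the builder and using the registry/tag exactly as logged above:

  # Exits non-zero when the tag is unpublished -- the "manifest unknown"
  # case the job hit before falling back to a source build.
  IMAGE="capzci.azurecr.io/kube-apiserver:v1.27.0-alpha.1.60_cc3cf560a0b0a7"
  if docker manifest inspect "${IMAGE}" >/dev/null 2>&1; then
    echo "${IMAGE} is published; a pull should succeed"
  else
    echo "${IMAGE} is missing; build Kubernetes from source instead"
  fi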
... skipping 761 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
  INFO: Cluster name is capz-conf-2nvvsh
  STEP: Creating namespace "capz-conf-2nvvsh" for hosting the cluster @ 01/27/23 17:17:31.25
  Jan 27 17:17:31.250: INFO: starting to create namespace for hosting the "capz-conf-2nvvsh" test spec
2023/01/27 17:17:31 failed trying to get namespace (capz-conf-2nvvsh):namespaces "capz-conf-2nvvsh" not found
  INFO: Creating namespace capz-conf-2nvvsh
  INFO: Creating event watcher for namespace "capz-conf-2nvvsh"
  conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/27/23 17:17:31.297
    conformance-tests
    Name | N | Min | Median | Mean | StdDev | Max
  INFO: Creating the workload cluster with name "capz-conf-2nvvsh" using the "conformance-presubmit-artifacts-windows-containerd" template (Kubernetes v1.27.0-alpha.1.56+4d9e8f76959f16, 1 control-plane machines, 0 worker machines)
... skipping 50 lines ...
  Jan 27 17:21:27.190: INFO: creating 1 resource(s)
  Jan 27 17:21:27.464: INFO: creating 1 resource(s)
  Jan 27 17:21:27.559: INFO: Clearing discovery cache
  Jan 27 17:21:27.560: INFO: beginning wait for 21 resources with timeout of 1m0s
  Jan 27 17:21:31.301: INFO: creating 1 resource(s)
  Jan 27 17:21:31.625: INFO: creating 6 resource(s)
  Jan 27 17:21:32.234: INFO: failed to record the release: update: failed to update: Put "https://capz-conf-2nvvsh-58f57015.eastus.cloudapp.azure.com:6443/api/v1/namespaces/tigera-operator/secrets/sh.helm.release.v1.projectcalico.v1": read tcp 10.60.19.201:53352->20.241.179.207:6443: read: connection reset by peer
  Jan 27 17:21:32.234: INFO: Install complete
  STEP: Waiting for Ready tigera-operator deployment pods @ 01/27/23 17:21:42.2
  STEP: waiting for deployment tigera-operator/tigera-operator to be available @ 01/27/23 17:21:42.384
  Jan 27 17:21:42.384: INFO: starting to wait for deployment to become available
  Jan 27 17:22:12.519: INFO: Deployment tigera-operator/tigera-operator is now available, took 30.134900054s
felixconfiguration.crd.projectcalico.org/default created
... skipping 38 lines ...
  Random Seed: 1674840373 - will randomize all specs
  
  Will run 340 of 7082 specs
  Running in parallel across 4 processes
  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSS•S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSS•SSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSS•SSS•SSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSS•SS•SSSSSS•S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSS•SSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSS••SSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSS•SSSSSSSS•SSS•SSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSS•SSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSS••SSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSS•SS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSS•SSSS•SSSSS•S•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSS•S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS••SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSS•SSSSS•S•SSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSS•SSSSSSSSSSSS•SSSSSSS••SSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSS•S•SSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS••SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•S•SSSSSSSSSSSSSSSSSSS•SSS•SSSSSSSSSSSSS•SSSSSSSSSSSS•SSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSS•SSSSSSSSSS•S•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSS•SSSSSSSSSSSSSSSSSSSSSS•SSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSS•SSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSS•SSSSSSSSS
SSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•S•SSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSS•S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSS••SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  • [FAILED] [74.117 seconds]

  [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance]
  test/e2e/common/storage/empty_dir.go:227
  
    Timeline >>
    STEP: Creating a kubernetes client @ 01/27/23 17:41:37.186
    Jan 27 17:41:37.186: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir @ 01/27/23 17:41:37.187
    STEP: Waiting for a default service account to be provisioned in namespace @ 01/27/23 17:41:37.288
    E0127 17:41:37.328028      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 01/27/23 17:41:37.349
    STEP: Creating Pod @ 01/27/23 17:41:37.41
    Jan 27 17:41:37.451: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25" in namespace "emptydir-3171" to be "running"
    Jan 27 17:41:37.482: INFO: Pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25": Phase="Pending", Reason="", readiness=false. Elapsed: 30.98355ms
    E0127 17:41:38.328454      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    E0127 17:41:39.329511      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    Jan 27 17:41:39.515: INFO: Pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064371543s
    E0127 17:41:40.330452      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    E0127 17:41:41.331610      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    Jan 27 17:41:41.515: INFO: Pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064487251s
... skipping 164 lines: the pod stayed Pending (status polled every 2s) while retrywatcher.go:130 logged "Watch failed" err="context canceled" once per second ...
    Jan 27 17:42:47.516: INFO: Pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.065305889s
    E0127 17:42:48.384251      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    E0127 17:42:49.385175      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    Jan 27 17:42:49.515: INFO: Pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.063761288s
    Jan 27 17:42:49.515: INFO: Pod "pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25" satisfied condition "running"
    STEP: Reading file content from the nginx-container @ 01/27/23 17:42:49.515
    Jan 27 17:42:49.515: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3171 PodName:pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Jan 27 17:42:49.515: INFO: >>> kubeConfig: /tmp/kubeconfig
    Jan 27 17:42:49.516: INFO: ExecWithOptions: Clientset creation
    Jan 27 17:42:49.516: INFO: ExecWithOptions: execute(POST https://capz-conf-2nvvsh-58f57015.eastus.cloudapp.azure.com:6443/api/v1/namespaces/emptydir-3171/pods/pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true)
    Jan 27 17:42:49.895: INFO: Exec stderr: ""
    Jan 27 17:42:49.895: INFO: Unexpected error: failed to execute command in pod pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25, container busybox-main-container: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "6c5c3e01226d901b88f8109d21386b13e3c67cdc7b84e2782df38498c6e52ae6": hcs::System::CreateProcess 5956fd001f8e1004b864d8e1c7f4bc744f9bf07e3ea93390d310d3ad3cfbdb60: The system cannot find the file specified.: unknown: 

        <*errors.errorString | 0xc001038f10>: {
            s: "Internal error occurred: error executing command in container: failed to exec in container: failed to start exec \"6c5c3e01226d901b88f8109d21386b13e3c67cdc7b84e2782df38498c6e52ae6\": hcs::System::CreateProcess 5956fd001f8e1004b864d8e1c7f4bc744f9bf07e3ea93390d310d3ad3cfbdb60: The system cannot find the file specified.: unknown",

        }
    [FAILED] in [It] - test/e2e/framework/pod/exec_util.go:107 @ 01/27/23 17:42:49.895

    Jan 27 17:42:49.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: dump namespace information after failure @ 01/27/23 17:42:49.938
    STEP: Collecting events from namespace "emptydir-3171". @ 01/27/23 17:42:49.938
    STEP: Found 7 events. @ 01/27/23 17:42:49.971
    Jan 27 17:42:49.971: INFO: At 2023-01-27 17:41:37 +0000 UTC - event for pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25: {default-scheduler } Scheduled: Successfully assigned emptydir-3171/pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25 to capz-conf-pl764
    Jan 27 17:42:49.971: INFO: At 2023-01-27 17:41:42 +0000 UTC - event for pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25: {kubelet capz-conf-pl764} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
... skipping 53 lines ...
    Jan 27 17:42:50.307: INFO: 	Container azuredisk ready: true, restart count 0
    Jan 27 17:42:50.307: INFO: 	Container csi-attacher ready: true, restart count 0
    Jan 27 17:42:50.307: INFO: 	Container csi-provisioner ready: true, restart count 0
    Jan 27 17:42:50.307: INFO: 	Container csi-resizer ready: true, restart count 0
    Jan 27 17:42:50.307: INFO: 	Container csi-snapshotter ready: true, restart count 0
    Jan 27 17:42:50.307: INFO: 	Container liveness-probe ready: true, restart count 0
    E0127 17:42:50.385812      18 retrywatcher.go:130] "Watch failed" err="context canceled"

    Jan 27 17:42:50.506: INFO: 
    Latency metrics for node capz-conf-2nvvsh-control-plane-mkbzc
    Jan 27 17:42:50.506: INFO: 
    Logging node info for node capz-conf-mq85r
    Jan 27 17:42:50.542: INFO: Node Info: &Node{ObjectMeta:{capz-conf-mq85r    8e7edb35-4b53-4bb4-9ad2-727659302e9f 16537 0 2023-01-27 17:24:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-mq85r kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.disk.csi.azure.com/zone: topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-2nvvsh cluster.x-k8s.io/cluster-namespace:capz-conf-2nvvsh cluster.x-k8s.io/machine:capz-conf-2nvvsh-md-win-7f6f8c8f4c-6dk2t cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-2nvvsh-md-win-7f6f8c8f4c csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-conf-mq85r"} kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.246.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:44:dd:45 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-27 17:24:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-01-27 17:24:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-27 17:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2023-01-27 17:25:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {calico-node.exe Update v1 2023-01-27 17:26:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2023-01-27 17:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-conf-2nvvsh/providers/Microsoft.Compute/virtualMachines/capz-conf-mq85r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-27 17:42:47 +0000 UTC,LastTransitionTime:2023-01-27 17:24:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-27 17:42:47 +0000 UTC,LastTransitionTime:2023-01-27 17:24:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-27 17:42:47 +0000 UTC,LastTransitionTime:2023-01-27 17:24:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-27 17:42:47 +0000 UTC,LastTransitionTime:2023-01-27 17:25:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-mq85r,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-mq85r,SystemUUID:2262889D-3713-48FB-B83D-929698624C2F,BootID:9,KernelVersion:10.0.17763.3887,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.1.60+cc3cf560a0b0a7-dirty,KubeProxyVersion:v1.27.0-alpha.1.60+cc3cf560a0b0a7-dirty,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:204576694,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:179603505,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:130115533,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:127002486,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.1.56_4d9e8f76959f16-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:112797444,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:111834447,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:2082c9b6488b3a2839141f472740c36484d5cbc91f7c24d67bc77ea311d4602b docker.io/sigwindowstools/calico-install:v3.24.5-hostprocess],SizeBytes:49820336,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ba0ac4633a832430a00374ef6cf1c701797017b8d09ccc3fb12db253e250887a docker.io/sigwindowstools/calico-node:v3.24.5-hostprocess],SizeBytes:28623190,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
    Jan 27 17:42:50.542: INFO: 
... skipping 66 lines ...
    Jan 27 17:42:50.979: INFO: 	Container calico-node-startup ready: true, restart count 0
    Jan 27 17:42:51.268: INFO: 
    Latency metrics for node capz-conf-pl764
    STEP: Destroying namespace "emptydir-3171" for this suite. @ 01/27/23 17:42:51.268
    << Timeline
  
    [FAILED] failed to execute command in pod pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25, container busybox-main-container: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "6c5c3e01226d901b88f8109d21386b13e3c67cdc7b84e2782df38498c6e52ae6": hcs::System::CreateProcess 5956fd001f8e1004b864d8e1c7f4bc744f9bf07e3ea93390d310d3ad3cfbdb60: The system cannot find the file specified.: unknown: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "6c5c3e01226d901b88f8109d21386b13e3c67cdc7b84e2782df38498c6e52ae6": hcs::System::CreateProcess 5956fd001f8e1004b864d8e1c7f4bc744f9bf07e3ea93390d310d3ad3cfbdb60: The system cannot find the file specified.: unknown

    In [It] at: test/e2e/framework/pod/exec_util.go:107 @ 01/27/23 17:42:49.895
  ------------------------------
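  The ExecWithOptions call that failed above is the programmatic form of a plain kubectl exec; a sketch for reproducing it by hand, assuming the workload cluster and the pod from this spec still exist (namespace, pod, container, command, and kubeconfig path are all taken from the timeline above):

    # The same command the framework ran via the exec subresource
    # (see the POST .../pods/.../exec URL in the timeline).
    kubectl --kubeconfig /tmp/kubeconfig -n emptydir-3171 \
      exec pod-sharedvolume-4488f2af-a7bb-469a-a61d-8f13e3123f25 \
      -c busybox-main-container \
      -- /bin/sh -c 'cat /usr/share/volumeshare/shareddata.txt'

  The hcs::System::CreateProcess error string suggests the failure sits in the Windows host compute service layer rather than in the test itself, so a manual retry helps separate a transient exec flake from a persistently broken container.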
  S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSS•SSSS•SSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSS•SSSSSSS•S•S•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSS•SSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•S•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSS•SSS•SSSSSSSSSS••SSSSSSSSS•SSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SS•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSS•SSSSSSSSSSS•SSSSSSSSSS•SSSSS•SSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSS•SSSSSSSS•SSS•SSSSSSSSSSSSSSSSSSSSS•SSSSSS••SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•S•SSSSSSSSSSSS•SSSSS•SSSSS•SSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSSS•SS••SSSSSSSSSSS••SSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•S•SSSSSSSS•SSSSSS•SSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS••SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSSS•SSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSS•SSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSS•••
  
  Summarizing 1 Failure:
    [FAIL] [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance]

    test/e2e/framework/pod/exec_util.go:107
  
  Ran 338 of 7082 Specs in 2021.431 seconds
  FAIL! -- 337 Passed | 1 Failed | 0 Pending | 6744 Skipped
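  To iterate on just this failure rather than all 338 specs, a sketch of a focused rerun (assumes a built upstream e2e.test binary and the workload cluster's kubeconfig; the focus regex is derived from the failing spec name in the summary above):

    # Re-run only the failing EmptyDir conformance spec.
    ./e2e.test \
      --kubeconfig=/tmp/kubeconfig \
      --ginkgo.focus='EmptyDir volumes.*pod should support shared volumes between containers'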

  
    I0127 17:26:14.066096      13 e2e.go:126] Starting e2e run "c5a0de00-a888-40ec-b72c-c015971df96e" on Ginkgo node 1
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.7.0
  
  PASS
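  The notices above name their own replacements; as a sketch, the non-deprecated Ginkgo v2 spellings look like this (flag names come from the notices; the binary name and values are illustrative):

    export ACK_GINKGO_DEPRECATIONS=2.7.0   # silences the silenceable deprecations
    ./e2e.test \
      --ginkgo.flake-attempts=2 \
      --ginkgo.show-node-events \
      --ginkgo.poll-progress-after=120s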
  
    I0127 17:26:14.069874      15 e2e.go:126] Starting e2e run "bf8df3dc-71ca-4d96-96a7-7f2ede6839f8" on Ginkgo node 2
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.7.0
  
  PASS
  
    I0127 17:26:14.058485      16 e2e.go:126] Starting e2e run "db61c672-1d00-4c5f-9041-df2e4a1936bf" on Ginkgo node 3
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.7.0
  
  PASS
  
    I0127 17:26:14.077714      18 e2e.go:126] Starting e2e run "a66f3372-ff94-4024-817f-5622bb4f8222" on Ginkgo node 4
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
    --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.

    --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.7.0
  
  --- FAIL: TestE2E (1899.93s)

  FAIL

  
  
  Ginkgo ran 1 suite in 33m43.272194021s
  
  Test Suite Failed

  You're using deprecated Ginkgo functionality:
  =============================================
    --slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.7.0
  
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/27/23 17:59:57.586
  Jan 27 17:59:57.587: INFO: FAILED!
  Jan 27 17:59:57.588: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
  STEP: Dumping logs from the "capz-conf-2nvvsh" workload cluster @ 01/27/23 17:59:57.588
  Jan 27 17:59:57.588: INFO: Dumping workload cluster capz-conf-2nvvsh/capz-conf-2nvvsh logs
  Jan 27 17:59:57.681: INFO: Collecting logs for Linux node capz-conf-2nvvsh-control-plane-mkbzc in cluster capz-conf-2nvvsh in namespace capz-conf-2nvvsh

  Jan 27 18:00:15.848: INFO: Collecting boot logs for AzureMachine capz-conf-2nvvsh-control-plane-mkbzc

  Jan 27 18:00:16.872: INFO: Collecting logs for Windows node capz-conf-mq85r in cluster capz-conf-2nvvsh in namespace capz-conf-2nvvsh

  Jan 27 18:03:35.660: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-mq85r to /logs/artifacts/clusters/capz-conf-2nvvsh/machines/capz-conf-2nvvsh-md-win-7f6f8c8f4c-6dk2t/crashdumps.tar
  Jan 27 18:03:36.903: INFO: Collecting boot logs for AzureMachine capz-conf-2nvvsh-md-win-mq85r

Failed to get logs for Machine capz-conf-2nvvsh-md-win-7f6f8c8f4c-6dk2t, Cluster capz-conf-2nvvsh/capz-conf-2nvvsh: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
  Jan 27 18:03:37.905: INFO: Collecting logs for Windows node capz-conf-pl764 in cluster capz-conf-2nvvsh in namespace capz-conf-2nvvsh

  Jan 27 18:06:57.044: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-pl764 to /logs/artifacts/clusters/capz-conf-2nvvsh/machines/capz-conf-2nvvsh-md-win-7f6f8c8f4c-s9qxk/crashdumps.tar
  Jan 27 18:06:58.941: INFO: Collecting boot logs for AzureMachine capz-conf-2nvvsh-md-win-pl764

Failed to get logs for Machine capz-conf-2nvvsh-md-win-7f6f8c8f4c-s9qxk, Cluster capz-conf-2nvvsh/capz-conf-2nvvsh: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
  Jan 27 18:06:59.862: INFO: Dumping workload cluster capz-conf-2nvvsh/capz-conf-2nvvsh kube-system pod logs
  Jan 27 18:07:00.195: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-765bf76756-dn4tv, container calico-apiserver
  Jan 27 18:07:00.195: INFO: Describing Pod calico-apiserver/calico-apiserver-765bf76756-dn4tv
  Jan 27 18:07:00.289: INFO: Describing Pod calico-apiserver/calico-apiserver-765bf76756-ktcl9
  Jan 27 18:07:00.289: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-765bf76756-ktcl9, container calico-apiserver
  Jan 27 18:07:00.355: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-6b7b9c649d-w888q, container calico-kube-controllers
... skipping 75 lines ...
  Jan 27 18:07:11.747: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-qv5qs, container kube-proxy
  Jan 27 18:07:11.747: INFO: Describing Pod kube-system/kube-proxy-windows-qv5qs
  Jan 27 18:07:12.149: INFO: Describing Pod kube-system/kube-scheduler-capz-conf-2nvvsh-control-plane-mkbzc
  Jan 27 18:07:12.149: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-2nvvsh-control-plane-mkbzc, container kube-scheduler
  Jan 27 18:07:12.551: INFO: Describing Pod kube-system/metrics-server-c9574f845-6m558
  Jan 27 18:07:12.551: INFO: Creating log watcher for controller kube-system/metrics-server-c9574f845-6m558, container metrics-server
  Jan 27 18:07:12.953: INFO: failed to describe pod limitrange-9119/pfpod: pods "pfpod" not found
  Jan 27 18:07:12.953: INFO: Creating log watcher for controller limitrange-9119/pfpod, container pause
  Jan 27 18:07:12.953: INFO: Describing Pod limitrange-9119/pfpod
  Jan 27 18:07:12.983: INFO: Error starting logs stream for pod limitrange-9119/pfpod, container pause: pods "pfpod" not found
  Jan 27 18:07:13.348: INFO: Fetching kube-system pod logs took 13.485828737s
  Jan 27 18:07:13.348: INFO: Dumping workload cluster capz-conf-2nvvsh/capz-conf-2nvvsh Azure activity log
  Jan 27 18:07:13.348: INFO: Creating log watcher for controller tigera-operator/tigera-operator-54b47459dd-kj49z, container tigera-operator
  Jan 27 18:07:13.348: INFO: Describing Pod tigera-operator/tigera-operator-54b47459dd-kj49z
  Jan 27 18:07:18.871: INFO: Fetching activity logs took 5.52301374s
  Jan 27 18:07:18.871: INFO: Dumping all the Cluster API resources in the "capz-conf-2nvvsh" namespace
... skipping 2 lines ...
  INFO: Waiting for the Cluster capz-conf-2nvvsh/capz-conf-2nvvsh to be deleted
  STEP: Waiting for cluster capz-conf-2nvvsh to be deleted @ 01/27/23 18:07:19.316
  Jan 27 18:13:09.545: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
  INFO: Deleting namespace capz-conf-2nvvsh
  Jan 27 18:13:09.567: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
  STEP: Redacting sensitive information from logs @ 01/27/23 18:13:10.078
• [FAILED] [3359.148 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc0014a1db8>: {
          error: <*errors.withMessage | 0xc000ff27e0>{
              cause: <*errors.errorString | 0xc00043fd30>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x33a3f59, 0x3651f67, 0x194537b, 0x1959958, 0x14d9741],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/27/23 17:59:57.586

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 +0x18fa
... skipping 8 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.006 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238

Ran 1 of 24 Specs in 3513.736 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 23 Skipped
--- FAIL: TestE2E (3513.74s)
FAIL
You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:282
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.7.0


Ginkgo ran 1 suite in 1h1m3.170554662s

Test Suite Failed
make[3]: *** [Makefile:654: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:669: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:685: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:695: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 8 lines ...