Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 58m3s
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
Timed out after 1500.001s.
Expected
    <int>: 0
to equal
    <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.4/framework/machinedeployment_helpers.go:129
from junit.e2e_suite.1.xml
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-yh2dw8
STEP: Creating namespace "capz-conf-yh2dw8" for hosting the cluster
Nov 2 00:58:07.194: INFO: starting to create namespace for hosting the "capz-conf-yh2dw8" test spec
INFO: Creating namespace capz-conf-yh2dw8
INFO: Creating event watcher for namespace "capz-conf-yh2dw8"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-yh2dw8" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.26.0-alpha.2.535+2452a95bd4ee1a, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-yh2dw8 --infrastructure (default) --kubernetes-version v1.26.0-alpha.2.535+2452a95bd4ee1a --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-yh2dw8/capz-conf-yh2dw8-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-yh2dw8/capz-conf-yh2dw8-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-yh2dw8-md-0 are in the "<None>" failure domain
STEP: Waiting for the workload nodes to exist
[AfterEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:237
Nov 2 01:30:11.642: INFO: FAILED!
Nov 2 01:30:11.642: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-yh2dw8" workload cluster
STEP: Dumping workload cluster capz-conf-yh2dw8/capz-conf-yh2dw8 logs
Nov 2 01:30:11.689: INFO: Collecting logs for Linux node capz-conf-yh2dw8-control-plane-zl5gv in cluster capz-conf-yh2dw8 in namespace capz-conf-yh2dw8
Nov 2 01:30:26.835: INFO: Collecting boot logs for AzureMachine capz-conf-yh2dw8-control-plane-zl5gv
Nov 2 01:30:27.861: INFO: Unable to collect logs as node doesn't have addresses
Nov 2 01:30:27.861: INFO: Collecting logs for Windows node capz-conf-yh2dw8-md-win-p7p8w in cluster capz-conf-yh2dw8 in namespace capz-conf-yh2dw8
Nov 2 01:34:40.288: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-yh2dw8-md-win-p7p8w to /logs/artifacts/clusters/capz-conf-yh2dw8/machines/capz-conf-yh2dw8-md-win-5ff5458c86-5gx7f/crashdumps.tar
Nov 2 01:34:40.777: INFO: Collecting boot logs for AzureMachine capz-conf-yh2dw8-md-win-p7p8w
Nov 2 01:34:40.810: INFO: Unable to collect logs as node doesn't have addresses
Nov 2 01:34:40.810: INFO: Collecting logs for Windows node capz-conf-yh2dw8-md-win-9sngz in cluster capz-conf-yh2dw8 in namespace capz-conf-yh2dw8
Nov 2 01:38:53.068: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-yh2dw8-md-win-9sngz to /logs/artifacts/clusters/capz-conf-yh2dw8/machines/capz-conf-yh2dw8-md-win-5ff5458c86-krld4/crashdumps.tar
Nov 2 01:38:54.240: INFO: Collecting boot logs for AzureMachine capz-conf-yh2dw8-md-win-9sngz
STEP: Dumping workload cluster capz-conf-yh2dw8/capz-conf-yh2dw8 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-755ff8d7b5-gjk52
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-mzb92, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-yh2dw8-control-plane-zl5gv
STEP: Collecting events for Pod kube-system/kube-proxy-kvlfs
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-tckch
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-gjk52, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-capz-conf-yh2dw8-control-plane-zl5gv
STEP: Creating log watcher for controller kube-system/etcd-capz-conf-yh2dw8-control-plane-zl5gv, container etcd
STEP: Fetching kube-system pod logs took 389.499564ms
STEP: Dumping workload cluster capz-conf-yh2dw8/capz-conf-yh2dw8 Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-yh2dw8-control-plane-zl5gv
STEP: failed to find events of Pod "kube-scheduler-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-yh2dw8-control-plane-zl5gv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-yh2dw8-control-plane-zl5gv, container kube-controller-manager
STEP: Collecting events for Pod kube-system/metrics-server-76f7667fbf-bt8w8
STEP: Creating log watcher for controller kube-system/calico-node-jkkkz, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-yh2dw8-control-plane-zl5gv
STEP: failed to find events of Pod "kube-controller-manager-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: Creating log watcher for controller kube-system/kube-proxy-kvlfs, container kube-proxy
STEP: Creating log watcher for controller kube-system/metrics-server-76f7667fbf-bt8w8, container metrics-server
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-mzb92
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-yh2dw8-control-plane-zl5gv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-tckch, container coredns
STEP: Collecting events for Pod kube-system/calico-node-jkkkz
STEP: failed to find events of Pod "etcd-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: failed to find events of Pod "kube-apiserver-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: Fetching activity logs took 2.762275179s
Nov 2 01:38:57.413: INFO: Dumping all the Cluster API resources in the "capz-conf-yh2dw8" namespace
Nov 2 01:38:57.814: INFO: Deleting all clusters in the capz-conf-yh2dw8 namespace
STEP: Deleting cluster capz-conf-yh2dw8
INFO: Waiting for the Cluster capz-conf-yh2dw8/capz-conf-yh2dw8 to be deleted
STEP: Waiting for cluster capz-conf-yh2dw8 to be deleted
Nov 2 01:45:38.080: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-yh2dw8
Nov 2 01:45:38.101: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 489 lines ...
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-yh2dw8
STEP: Creating namespace "capz-conf-yh2dw8" for hosting the cluster
Nov 2 00:58:07.194: INFO: starting to create namespace for hosting the "capz-conf-yh2dw8" test spec
2022/11/02 00:58:07 failed trying to get namespace (capz-conf-yh2dw8): namespaces "capz-conf-yh2dw8" not found
INFO: Creating namespace capz-conf-yh2dw8
INFO: Creating event watcher for namespace "capz-conf-yh2dw8"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-yh2dw8" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.26.0-alpha.2.535+2452a95bd4ee1a, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
... skipping 32 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-yh2dw8-md-0 are in the "<None>" failure domain
STEP: Waiting for the workload nodes to exist
[AfterEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:237
Nov 2 01:30:11.642: INFO: FAILED!
Nov 2 01:30:11.642: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-yh2dw8" workload cluster
STEP: Dumping workload cluster capz-conf-yh2dw8/capz-conf-yh2dw8 logs
Nov 2 01:30:11.689: INFO: Collecting logs for Linux node capz-conf-yh2dw8-control-plane-zl5gv in cluster capz-conf-yh2dw8 in namespace capz-conf-yh2dw8
Nov 2 01:30:26.835: INFO: Collecting boot logs for AzureMachine capz-conf-yh2dw8-control-plane-zl5gv
Nov 2 01:30:27.861: INFO: Unable to collect logs as node doesn't have addresses
Nov 2 01:30:27.861: INFO: Collecting logs for Windows node capz-conf-yh2dw8-md-win-p7p8w in cluster capz-conf-yh2dw8 in namespace capz-conf-yh2dw8
Nov 2 01:34:40.288: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-yh2dw8-md-win-p7p8w to /logs/artifacts/clusters/capz-conf-yh2dw8/machines/capz-conf-yh2dw8-md-win-5ff5458c86-5gx7f/crashdumps.tar
Nov 2 01:34:40.777: INFO: Collecting boot logs for AzureMachine capz-conf-yh2dw8-md-win-p7p8w
Failed to get logs for machine capz-conf-yh2dw8-md-win-5ff5458c86-5gx7f, cluster capz-conf-yh2dw8/capz-conf-yh2dw8: [dialing from control plane to target node at capz-conf-yh2dw8-md-win-p7p8w: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
Nov 2 01:34:40.810: INFO: Unable to collect logs as node doesn't have addresses
Nov 2 01:34:40.810: INFO: Collecting logs for Windows node capz-conf-yh2dw8-md-win-9sngz in cluster capz-conf-yh2dw8 in namespace capz-conf-yh2dw8
Nov 2 01:38:53.068: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-yh2dw8-md-win-9sngz to /logs/artifacts/clusters/capz-conf-yh2dw8/machines/capz-conf-yh2dw8-md-win-5ff5458c86-krld4/crashdumps.tar
Nov 2 01:38:54.240: INFO: Collecting boot logs for AzureMachine capz-conf-yh2dw8-md-win-9sngz
Failed to get logs for machine capz-conf-yh2dw8-md-win-5ff5458c86-krld4, cluster capz-conf-yh2dw8/capz-conf-yh2dw8: [dialing from control plane to target node at capz-conf-yh2dw8-md-win-9sngz: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
STEP: Dumping workload cluster capz-conf-yh2dw8/capz-conf-yh2dw8 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-755ff8d7b5-gjk52
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-mzb92, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-yh2dw8-control-plane-zl5gv
STEP: Collecting events for Pod kube-system/kube-proxy-kvlfs
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-tckch
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-gjk52, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-capz-conf-yh2dw8-control-plane-zl5gv
STEP: Creating log watcher for controller kube-system/etcd-capz-conf-yh2dw8-control-plane-zl5gv, container etcd
STEP: Fetching kube-system pod logs took 389.499564ms
STEP: Dumping workload cluster capz-conf-yh2dw8/capz-conf-yh2dw8 Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-yh2dw8-control-plane-zl5gv
STEP: failed to find events of Pod "kube-scheduler-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-yh2dw8-control-plane-zl5gv, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-yh2dw8-control-plane-zl5gv, container kube-controller-manager
STEP: Collecting events for Pod kube-system/metrics-server-76f7667fbf-bt8w8
STEP: Creating log watcher for controller kube-system/calico-node-jkkkz, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-yh2dw8-control-plane-zl5gv
STEP: failed to find events of Pod "kube-controller-manager-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: Creating log watcher for controller kube-system/kube-proxy-kvlfs, container kube-proxy
STEP: Creating log watcher for controller kube-system/metrics-server-76f7667fbf-bt8w8, container metrics-server
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-mzb92
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-yh2dw8-control-plane-zl5gv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-tckch, container coredns
STEP: Collecting events for Pod kube-system/calico-node-jkkkz
STEP: failed to find events of Pod "etcd-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: failed to find events of Pod "kube-apiserver-capz-conf-yh2dw8-control-plane-zl5gv"
STEP: Fetching activity logs took 2.762275179s
Nov 2 01:38:57.413: INFO: Dumping all the Cluster API resources in the "capz-conf-yh2dw8" namespace
Nov 2 01:38:57.814: INFO: Deleting all clusters in the capz-conf-yh2dw8 namespace
STEP: Deleting cluster capz-conf-yh2dw8
INFO: Waiting for the Cluster capz-conf-yh2dw8/capz-conf-yh2dw8 to be deleted
STEP: Waiting for cluster capz-conf-yh2dw8 to be deleted
... skipping 68 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Conformance Tests [Measurement] conformance-tests
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.4/framework/machinedeployment_helpers.go:129

Ran 1 of 23 Specs in 3010.621 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 22 Skipped
--- FAIL: TestE2E (3010.64s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 13 lines ...
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 52m25.415512767s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
- For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
- To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make[3]: *** [Makefile:655: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:672: test-e2e-local] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:682: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:692: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...