Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 52m54s
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
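The `--ginkgo.focus` value is an anchored regular expression over each spec's full name, with spaces written as `\s` and hyphens escaped. As an illustration only (not part of the job config), the same filter can be reproduced with `grep -P` against spec names, assuming GNU grep with PCRE support is available:

```shell
# The escaped focus string is a regex anchored at the end of the spec name.
focus='capz\-e2e\sConformance\sTests\sconformance\-tests$'
# Only the conformance spec matches; other spec names are filtered out.
printf '%s\n' \
  'capz-e2e Conformance Tests conformance-tests' \
  'capz-e2e Workload cluster creation Creating a VMSS cluster' \
  | grep -P "$focus"
# → capz-e2e Conformance Tests conformance-tests
```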
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
Timed out after 1500.000s.
Expected
    <int>: 1
to equal
    <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.4/framework/machinedeployment_helpers.go:129
from junit.e2e_suite.1.xml
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-mczppm
STEP: Creating namespace "capz-conf-mczppm" for hosting the cluster
Nov 25 01:05:47.315: INFO: starting to create namespace for hosting the "capz-conf-mczppm" test spec
INFO: Creating namespace capz-conf-mczppm
INFO: Creating event watcher for namespace "capz-conf-mczppm"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-mczppm" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.27.0-alpha.0.46+8f2371bcceff79, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-mczppm --infrastructure (default) --kubernetes-version v1.27.0-alpha.0.46+8f2371bcceff79 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-mczppm/capz-conf-mczppm-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-mczppm/capz-conf-mczppm-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-mczppm-md-0 are in the "<None>" failure domain
STEP: Waiting for the workload nodes to exist
[AfterEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:237
Nov 25 01:35:41.637: INFO: FAILED!
Nov 25 01:35:41.637: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-mczppm" workload cluster
STEP: Dumping workload cluster capz-conf-mczppm/capz-conf-mczppm logs
Nov 25 01:35:41.691: INFO: Collecting logs for Linux node capz-conf-mczppm-control-plane-nptxr in cluster capz-conf-mczppm in namespace capz-conf-mczppm
Nov 25 01:35:56.173: INFO: Collecting boot logs for AzureMachine capz-conf-mczppm-control-plane-nptxr
Nov 25 01:35:57.062: INFO: Collecting logs for Windows node capz-conf-rds69 in cluster capz-conf-mczppm in namespace capz-conf-mczppm
Nov 25 01:40:10.018: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-rds69 to /logs/artifacts/clusters/capz-conf-mczppm/machines/capz-conf-mczppm-md-win-6d79bb9cb7-g7ls5/crashdumps.tar
Nov 25 01:40:10.493: INFO: Collecting boot logs for AzureMachine capz-conf-mczppm-md-win-rds69
Nov 25 01:40:11.541: INFO: Collecting logs for Windows node capz-conf-sss2m in cluster capz-conf-mczppm in namespace capz-conf-mczppm
Nov 25 01:42:38.607: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-sss2m to /logs/artifacts/clusters/capz-conf-mczppm/machines/capz-conf-mczppm-md-win-6d79bb9cb7-vrw2m/crashdumps.tar
Nov 25 01:42:40.235: INFO: Collecting boot logs for AzureMachine capz-conf-mczppm-md-win-sss2m
STEP: Dumping workload cluster capz-conf-mczppm/capz-conf-mczppm kube-system pod logs
STEP: Fetching kube-system pod logs took 367.067153ms
STEP: Dumping workload cluster capz-conf-mczppm/capz-conf-mczppm Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-657b584867-9pmns, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-657b584867-9pmns
STEP: Creating log watcher for controller kube-system/calico-node-7fk9g, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-7fk9g
STEP: Creating log watcher for controller kube-system/calico-node-windows-2wkd4, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-2wkd4, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-mczppm-control-plane-nptxr, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-mczppm-control-plane-nptxr
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-8k484, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-8k484
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-mczppm-control-plane-nptxr
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-mczppm-control-plane-nptxr, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-98bn5
STEP: Creating log watcher for controller kube-system/kube-proxy-98bn5, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-2wkd4
STEP: Creating log watcher for controller kube-system/containerd-logger-xp4wr, container containerd-logger
STEP: Collecting events for Pod kube-system/containerd-logger-xp4wr
STEP: Collecting events for Pod kube-system/coredns-787d4945fb-bwmpd
STEP: Creating log watcher for controller kube-system/coredns-787d4945fb-bwmpd, container coredns
STEP: Collecting events for Pod kube-system/coredns-787d4945fb-m7wgg
STEP: Creating log watcher for controller kube-system/csi-proxy-l2lnw, container csi-proxy
STEP: Creating log watcher for controller kube-system/coredns-787d4945fb-m7wgg, container coredns
STEP: Collecting events for Pod kube-system/csi-proxy-l2lnw
STEP: Creating log watcher for controller kube-system/metrics-server-c9574f845-lprjm, container metrics-server
STEP: Collecting events for Pod kube-system/etcd-capz-conf-mczppm-control-plane-nptxr
STEP: Collecting events for Pod kube-system/metrics-server-c9574f845-lprjm
STEP: Creating log watcher for controller kube-system/etcd-capz-conf-mczppm-control-plane-nptxr, container etcd
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-mczppm-control-plane-nptxr
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-mczppm-control-plane-nptxr, container kube-controller-manager
STEP: Fetching activity logs took 3.680970716s
Nov 25 01:42:45.247: INFO: Dumping all the Cluster API resources in the "capz-conf-mczppm" namespace
Nov 25 01:42:45.660: INFO: Deleting all clusters in the capz-conf-mczppm namespace
STEP: Deleting cluster capz-conf-mczppm
INFO: Waiting for the Cluster capz-conf-mczppm/capz-conf-mczppm to be deleted
STEP: Waiting for cluster capz-conf-mczppm to be deleted
Nov 25 01:49:15.993: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-mczppm
Nov 25 01:49:16.013: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 488 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-mczppm
STEP: Creating namespace "capz-conf-mczppm" for hosting the cluster
Nov 25 01:05:47.315: INFO: starting to create namespace for hosting the "capz-conf-mczppm" test spec
2022/11/25 01:05:47 failed trying to get namespace (capz-conf-mczppm): namespaces "capz-conf-mczppm" not found
INFO: Creating namespace capz-conf-mczppm
INFO: Creating event watcher for namespace "capz-conf-mczppm"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-mczppm" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.27.0-alpha.0.46+8f2371bcceff79, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
... skipping 32 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-mczppm-md-0 are in the "<None>" failure domain
STEP: Waiting for the workload nodes to exist
[AfterEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:237
Nov 25 01:35:41.637: INFO: FAILED!
Nov 25 01:35:41.637: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-mczppm" workload cluster
STEP: Dumping workload cluster capz-conf-mczppm/capz-conf-mczppm logs
Nov 25 01:35:41.691: INFO: Collecting logs for Linux node capz-conf-mczppm-control-plane-nptxr in cluster capz-conf-mczppm in namespace capz-conf-mczppm
Nov 25 01:35:56.173: INFO: Collecting boot logs for AzureMachine capz-conf-mczppm-control-plane-nptxr
Nov 25 01:35:57.062: INFO: Collecting logs for Windows node capz-conf-rds69 in cluster capz-conf-mczppm in namespace capz-conf-mczppm
Nov 25 01:40:10.018: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-rds69 to /logs/artifacts/clusters/capz-conf-mczppm/machines/capz-conf-mczppm-md-win-6d79bb9cb7-g7ls5/crashdumps.tar
Nov 25 01:40:10.493: INFO: Collecting boot logs for AzureMachine capz-conf-mczppm-md-win-rds69
Failed to get logs for machine capz-conf-mczppm-md-win-6d79bb9cb7-g7ls5, cluster capz-conf-mczppm/capz-conf-mczppm: dialing from control plane to target node at capz-conf-rds69: ssh: rejected: connect failed (Temporary failure in name resolution)
Nov 25 01:40:11.541: INFO: Collecting logs for Windows node capz-conf-sss2m in cluster capz-conf-mczppm in namespace capz-conf-mczppm
Nov 25 01:42:38.607: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-sss2m to /logs/artifacts/clusters/capz-conf-mczppm/machines/capz-conf-mczppm-md-win-6d79bb9cb7-vrw2m/crashdumps.tar
Nov 25 01:42:40.235: INFO: Collecting boot logs for AzureMachine capz-conf-mczppm-md-win-sss2m
Failed to get logs for machine capz-conf-mczppm-md-win-6d79bb9cb7-vrw2m, cluster capz-conf-mczppm/capz-conf-mczppm: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster capz-conf-mczppm/capz-conf-mczppm kube-system pod logs
STEP: Fetching kube-system pod logs took 367.067153ms
STEP: Dumping workload cluster capz-conf-mczppm/capz-conf-mczppm Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-657b584867-9pmns, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-657b584867-9pmns
STEP: Creating log watcher for controller kube-system/calico-node-7fk9g, container calico-node
... skipping 100 lines ...
JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml

Summarizing 1 Failure:

[Fail] Conformance Tests [Measurement] conformance-tests
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.4/framework/machinedeployment_helpers.go:129

Ran 1 of 22 Specs in 2788.295 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 21 Skipped
--- FAIL: TestE2E (2788.32s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 13 lines ...
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=1.16.5

Ginkgo ran 1 suite in 48m36.196041503s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
- For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
- To comment, chime in at https://github.com/onsi/ginkgo/issues/711
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make[3]: *** [Makefile:652: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:669: test-e2e-local] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:679: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:689: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...