PR | upxinxin: Enable public MEC on CAPZ
Result | FAILURE
Tests | 1 failed / 27 succeeded
Started |
Elapsed | 1h9m
Revision | b55dbac22960eb0ff42a2c2edf946014cfebdfc5
Refs | 2836
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sclusters\son\spublic\sMEC\s\[OPTIONAL\]\swith\s1\scontrol\splane\snodes\sand\s1\sworker\snode$'
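The `--ginkgo.focus` argument above is a shell-escaped regular expression selecting the single public MEC spec. As a minimal sketch (assuming Go's `regexp` syntax, which Ginkgo uses, and with the shell backslash-escaping unwound), it matches exactly the failing spec's full name:

```go
package main

import (
	"fmt"
	"regexp"
)

// focus reconstructs the --ginkgo.focus pattern from the command above,
// with the shell-level backslash escapes removed (\s kept as regex whitespace).
var focus = regexp.MustCompile(`capz-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sclusters\son\spublic\sMEC\s\[OPTIONAL\]\swith\s1\scontrol\splane\snodes\sand\s1\sworker\snode$`)

func main() {
	spec := "capz-e2e [It] Workload cluster creation Creating clusters on public MEC [OPTIONAL] with 1 control plane nodes and 1 worker node"
	fmt.Println(focus.MatchString(spec)) // true
}
```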
[FAILED] Timed out after 1200.001s.
Timed out waiting for Cluster capz-e2e-v3wmxg/capz-e2e-v3wmxg-edgezone to provision
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144 @ 02/02/23 19:58:37.423
There were additional failures detected after the initial failure. These are visible in the timeline from junit.e2e_suite.1.xml
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-v3wmxg):namespaces "capz-e2e-v3wmxg" not found
cluster.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 02/02/23 19:38:33.282
INFO: "" started at Thu, 02 Feb 2023 19:38:33 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-v3wmxg" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:38:33.282
Feb 2 19:38:33.282: INFO: starting to create namespace for hosting the "capz-e2e-v3wmxg" test spec
INFO: Creating namespace capz-e2e-v3wmxg
INFO: Creating event watcher for namespace "capz-e2e-v3wmxg"
Feb 2 19:38:33.449: INFO: Using existing cluster identity secret
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 02/02/23 19:38:33.449 (167ms)
> Enter [It] with 1 control plane nodes and 1 worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:966 @ 02/02/23 19:38:33.449
STEP: using user-assigned identity - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:967 @ 02/02/23 19:38:33.449
INFO: Cluster name is capz-e2e-v3wmxg-edgezone
INFO: Creating the workload cluster with name "capz-e2e-v3wmxg-edgezone" using the "edgezone" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-v3wmxg-edgezone --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor edgezone
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 19:38:37.42
[FAILED] Timed out after 1200.001s.
Timed out waiting for Cluster capz-e2e-v3wmxg/capz-e2e-v3wmxg-edgezone to provision
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144 @ 02/02/23 19:58:37.423
< Exit [It] with 1 control plane nodes and 1 worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:966 @ 02/02/23 19:58:37.423 (20m3.974s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 02/02/23 19:58:37.423
Feb 2 19:58:37.561: INFO: FAILED!
Feb 2 19:58:37.561: INFO: Cleaning up after "Workload cluster creation Creating clusters on public MEC [OPTIONAL] with 1 control plane nodes and 1 worker node" spec
STEP: Unable to dump workload cluster logs as the cluster is nil - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:154 @ 02/02/23 19:58:37.561
Feb 2 19:58:37.561: INFO: Dumping all the Cluster API resources in the "capz-e2e-v3wmxg" namespace
Feb 2 19:58:40.949: INFO: Deleting all clusters in the capz-e2e-v3wmxg namespace
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 19:58:40.95
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 @ 02/02/23 20:01:52.794
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.dumpSpecResourcesAndCleanup({0x4356720, 0xc000128008}, {{0x3eab037, 0x17}, {0x4368c90, 0xc0000b14d0}, {0xc0001e7500, 0xf}, 0xc000636c60, 0xc00053c9b0, ...})
      /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:177 +0x4ad
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func1.2()
      /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:138 +0x2d0
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 02/02/23 20:01:52.794 (3m15.371s)
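The `[PANICKED]` section above shows the AfterEach cleanup dereferencing a nil pointer inside `dumpSpecResourcesAndCleanup` after the workload cluster never provisioned ("Unable to dump workload cluster logs as the cluster is nil"). A minimal sketch of the kind of nil guard the cleanup path would need; the `Cluster` type and `dumpLogs` helper here are hypothetical stand-ins, not the actual CAPZ test types:

```go
package main

import "fmt"

// Cluster is a hypothetical stand-in for the workload cluster handle
// that was nil when the AfterEach cleanup ran.
type Cluster struct {
	Name string
}

// dumpLogs guards against a nil cluster before dereferencing it,
// instead of panicking with "invalid memory address or nil pointer
// dereference" as the cleanup in common.go:177 did.
func dumpLogs(c *Cluster) string {
	if c == nil {
		return "skipping log dump: cluster is nil"
	}
	return "dumping logs for " + c.Name
}

func main() {
	fmt.Println(dumpLogs(nil))                                       // skipping log dump: cluster is nil
	fmt.Println(dumpLogs(&Cluster{Name: "capz-e2e-v3wmxg-edgezone"})) // dumping logs for capz-e2e-v3wmxg-edgezone
}
```

Because the panic happened in AfterEach, it also masked the original provisioning timeout in the junit report, which is one reason a guard like this matters in cleanup code.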
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
... skipping 643 lines ...
------------------------------
• [887.067 seconds]
Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-egunjp):namespaces "capz-e2e-egunjp" not found
cluster.cluster.x-k8s.io/capz-e2e-egunjp-flatcar created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-egunjp-flatcar created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-egunjp-flatcar-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-egunjp-flatcar-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-egunjp-flatcar-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-egunjp-flatcar-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-egunjp-flatcar-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
Failed to get logs for Machine capz-e2e-egunjp-flatcar-control-plane-j2xpk, Cluster capz-e2e-egunjp/capz-e2e-egunjp-flatcar: [
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:34244->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:34248->20.253.101.85:22: read: connection reset by peer]
Failed to get logs for Machine capz-e2e-egunjp-flatcar-md-0-75b7b5cbb-cgg72, Cluster capz-e2e-egunjp/capz-e2e-egunjp-flatcar: [
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36522->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36536->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36528->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36524->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36534->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36538->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36526->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36532->20.253.101.85:22: read: connection reset by peer,
  dialing public load balancer at capz-e2e-egunjp-flatcar-86b17205.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.34.37:36530->20.253.101.85:22: read: connection reset by peer]
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Thu, 02 Feb 2023 19:38:33 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-egunjp" for hosting the cluster @ 02/02/23 19:38:33.243
Feb 2 19:38:33.243: INFO: starting to create namespace for hosting the "capz-e2e-egunjp" test spec
... skipping 157 lines ...
------------------------------
• [1091.617 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:575

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-13463b):namespaces "capz-e2e-13463b" not found
cluster.cluster.x-k8s.io/capz-e2e-13463b-flex created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-13463b-flex created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-13463b-flex-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-13463b-flex-control-plane created
machinepool.cluster.x-k8s.io/capz-e2e-13463b-flex-mp-0 created
azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-13463b-flex-mp-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-13463b-flex-mp-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
W0202 19:47:00.222575 37499 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/02/02 19:47:40 [DEBUG] GET http://52.151.241.56
W0202 19:48:14.281100 37499 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
Failed to get logs for MachinePool capz-e2e-13463b-flex-mp-0, Cluster capz-e2e-13463b/capz-e2e-13463b-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-13463b-flex/providers/Microsoft.Compute. Invalid resource Id format
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Thu, 02 Feb 2023 19:38:33 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-13463b" for hosting the cluster @ 02/02/23 19:38:33.267
Feb 2 19:38:33.267: INFO: starting to create namespace for hosting the "capz-e2e-13463b" test spec
... skipping 229 lines ...
------------------------------
• [1345.603 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:639

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-06ane4):namespaces "capz-e2e-06ane4" not found
cluster.cluster.x-k8s.io/capz-e2e-06ane4-oot created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-06ane4-oot created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-06ane4-oot-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-06ane4-oot-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-06ane4-oot-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-06ane4-oot-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-06ane4-oot-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
W0202 19:47:14.771999 37507 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/02/02 19:48:35 [DEBUG] GET http://52.151.246.25
2023/02/02 19:49:05 [ERR] GET http://52.151.246.25 request failed: Get "http://52.151.246.25": dial tcp 52.151.246.25:80: i/o timeout
2023/02/02 19:49:05 [DEBUG] GET http://52.151.246.25: retrying in 1s (4 left)
2023/02/02 19:49:36 [ERR] GET http://52.151.246.25 request failed: Get "http://52.151.246.25": dial tcp 52.151.246.25:80: i/o timeout
2023/02/02 19:49:36 [DEBUG] GET http://52.151.246.25: retrying in 2s (3 left)
W0202 19:50:06.259211 37507 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/02/02 19:50:09 Error trying to deploy storage class oot-managedhdd9aug06 in namespace :storageclasses.storage.k8s.io "oot-managedhdd9aug06" already exists
2023/02/02 19:50:12 Error trying to deploy storage class oot-dd-managed-hdd-5g70a2s2 in namespace :persistentvolumeclaims "oot-dd-managed-hdd-5g70a2s2" already exists
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Thu, 02 Feb 2023 19:38:33 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-06ane4" for hosting the cluster @ 02/02/23 19:38:33.267
Feb 2 19:38:33.267: INFO: starting to create namespace for hosting the "capz-e2e-06ane4" test spec
... skipping 271 lines ...
------------------------------
• [1376.353 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:832

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-1tp91j):namespaces "capz-e2e-1tp91j" not found
cluster.cluster.x-k8s.io/capz-e2e-1tp91j-dual-stack created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-1tp91j-dual-stack created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-1tp91j-dual-stack-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1tp91j-dual-stack-control-plane created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinedeployment.cluster.x-k8s.io/capz-e2e-1tp91j-dual-stack-md-0 created
... skipping 330 lines ...
------------------------------
• [1380.875 seconds]
Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:908

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-9qmsbr):namespaces "capz-e2e-9qmsbr" not found
clusterclass.cluster.x-k8s.io/ci-default created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
... skipping 3 lines ...
cluster.cluster.x-k8s.io/capz-e2e-9qmsbr-cc created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-9qmsbr-cc-calico created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
configmap/cni-capz-e2e-9qmsbr-cc-calico-windows created
configmap/csi-proxy-addon created
Failed to get logs for Machine capz-e2e-9qmsbr-cc-md-0-x8zkk-797b67c54-w87np, Cluster capz-e2e-9qmsbr/capz-e2e-9qmsbr-cc: dialing public load balancer at capz-e2e-9qmsbr-cc-6fc49815.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-9qmsbr-cc-md-win-6t4ms-8598b754d6-ttdh7, Cluster capz-e2e-9qmsbr/capz-e2e-9qmsbr-cc: dialing public load balancer at capz-e2e-9qmsbr-cc-6fc49815.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-9qmsbr-cc-qbrfm-r6prt, Cluster capz-e2e-9qmsbr/capz-e2e-9qmsbr-cc: dialing public load balancer at capz-e2e-9qmsbr-cc-6fc49815.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Thu, 02 Feb 2023 19:38:33 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-9qmsbr" for hosting the cluster @ 02/02/23 19:38:33.282
Feb 2 19:38:33.282: INFO: starting to create namespace for hosting the "capz-e2e-9qmsbr" test spec
... skipping 185 lines ...
Feb 2 19:52:48.746: INFO: Describing Pod kube-system/kube-scheduler-capz-e2e-9qmsbr-cc-control-plane-rlwqj-t67hd
Feb 2 19:52:48.746: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-9qmsbr-cc-control-plane-rlwqj-t67hd, container kube-scheduler
Feb 2 19:52:49.146: INFO: Fetching kube-system pod logs took 8.282086758s
Feb 2 19:52:49.146: INFO: Dumping workload cluster capz-e2e-9qmsbr/capz-e2e-9qmsbr-cc Azure activity log
Feb 2 19:52:49.147: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-8vn9p, container tigera-operator
Feb 2 19:52:49.148: INFO: Describing Pod tigera-operator/tigera-operator-64db64cb98-8vn9p
Feb 2 19:52:49.170: INFO: Error fetching activity logs for cluster capz-e2e-9qmsbr-cc in namespace capz-e2e-9qmsbr. Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-9qmsbr-cc" not found
Feb 2 19:52:49.170: INFO: Fetching activity logs took 23.464982ms
Feb 2 19:52:49.170: INFO: Dumping all the Cluster API resources in the "capz-e2e-9qmsbr" namespace
Feb 2 19:52:49.603: INFO: Deleting all clusters in the capz-e2e-9qmsbr namespace
STEP: Deleting cluster capz-e2e-9qmsbr-cc @ 02/02/23 19:52:49.628
INFO: Waiting for the Cluster capz-e2e-9qmsbr/capz-e2e-9qmsbr-cc to be deleted
STEP: Waiting for cluster capz-e2e-9qmsbr-cc to be deleted @ 02/02/23 19:52:49.641
... skipping 5 lines ...
<< Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [1399.512 seconds]
Workload cluster creation Creating clusters on public MEC [OPTIONAL] [It] with 1 control plane nodes and 1 worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:966

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-v3wmxg):namespaces "capz-e2e-v3wmxg" not found
cluster.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-v3wmxg-edgezone-md-0 created
... skipping 14 lines ...
INFO: Creating the workload cluster with name "capz-e2e-v3wmxg-edgezone" using the "edgezone" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-v3wmxg-edgezone --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor edgezone
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 02/02/23 19:38:37.42
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144 @ 02/02/23 19:58:37.423
Feb 2 19:58:37.561: INFO: FAILED!
Feb 2 19:58:37.561: INFO: Cleaning up after "Workload cluster creation Creating clusters on public MEC [OPTIONAL] with 1 control plane nodes and 1 worker node" spec
STEP: Unable to dump workload cluster logs as the cluster is nil @ 02/02/23 19:58:37.561
Feb 2 19:58:37.561: INFO: Dumping all the Cluster API resources in the "capz-e2e-v3wmxg" namespace
Feb 2 19:58:40.949: INFO: Deleting all clusters in the capz-e2e-v3wmxg namespace
STEP: Redacting sensitive information from logs @ 02/02/23 19:58:40.95
[PANICKED] in [AfterEach] - /usr/local/go/src/runtime/panic.go:260 @ 02/02/23 20:01:52.794
<< Timeline
[FAILED] Timed out after 1200.001s.
Timed out waiting for Cluster capz-e2e-v3wmxg/capz-e2e-v3wmxg-edgezone to provision
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144 @ 02/02/23 19:58:37.423
... skipping 16 lines ...
------------------------------
• [1406.293 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-mx4imr):namespaces "capz-e2e-mx4imr" not found
cluster.cluster.x-k8s.io/capz-e2e-mx4imr-gpu created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-mx4imr-gpu created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-mx4imr-gpu-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-mx4imr-gpu-control-plane created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinedeployment.cluster.x-k8s.io/capz-e2e-mx4imr-gpu-md-0 created
... skipping 232 lines ...
------------------------------
• [3340.133 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156

Captured StdOut/StdErr Output >>
2023/02/02 19:38:33 failed trying to get namespace (capz-e2e-jcyppy):namespaces "capz-e2e-jcyppy" not found
cluster.cluster.x-k8s.io/capz-e2e-jcyppy-public-custom-vnet created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-jcyppy-public-custom-vnet created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-jcyppy-public-custom-vnet-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-jcyppy-public-custom-vnet-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-jcyppy-public-custom-vnet-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-jcyppy-public-custom-vnet-md-0 created
... skipping 247 lines ...
Feb 2 20:27:25.660: INFO: Describing Pod kube-system/kube-scheduler-capz-e2e-jcyppy-public-custom-vnet-control-plane-jf6wp
Feb 2 20:27:25.660: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-jcyppy-public-custom-vnet-control-plane-jf6wp, container kube-scheduler
Feb 2 20:27:26.055: INFO: Fetching kube-system pod logs took 9.830837401s
Feb 2 20:27:26.055: INFO: Dumping workload cluster capz-e2e-jcyppy/capz-e2e-jcyppy-public-custom-vnet Azure activity log
Feb 2 20:27:26.055: INFO: Describing Pod tigera-operator/tigera-operator-64db64cb98-xrjgp
Feb 2 20:27:26.055: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-xrjgp, container tigera-operator
Feb 2 20:27:36.648: INFO: Got error while iterating over activity logs for resource group capz-e2e-jcyppy-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure responding to next results request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: {"<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\r\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=iso-8859-1\"/>\r\n<title>404 - File or directory not found.</title>\r\n<style type=\"text/css\">\r\n<!--\r\nbody{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}\r\nfieldset{padding:0 15px 10px 15px;} \r\nh1{font-size:2.4em;margin:0;color:#FFF;}\r\nh2{font-si" '\x00' '\x00'} error: invalid character '<' looking for beginning of value
Feb 2 20:27:36.648: INFO: Fetching activity logs took 10.592809265s
Feb 2 20:27:36.648: INFO: Dumping all the Cluster API resources in the "capz-e2e-jcyppy" namespace
Feb 2 20:27:37.039: INFO: Deleting all clusters in the capz-e2e-jcyppy namespace
STEP: Deleting cluster capz-e2e-jcyppy-public-custom-vnet @ 02/02/23 20:27:37.059
INFO: Waiting for the Cluster capz-e2e-jcyppy/capz-e2e-jcyppy-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-jcyppy-public-custom-vnet to be deleted @ 02/02/23 20:27:37.073
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-687b6fd9bc-4zfqb, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-669bd95bbb-9wts7, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6f7b75f796-5cjj2, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-77fc5bd96f-jz24v, container manager: http2: client connection lost
Feb 2 20:30:37.179: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-jcyppy
Feb 2 20:30:37.199: INFO: Running additional cleanup for the "create-workload-cluster" test spec
Feb 2 20:30:37.199: INFO: deleting an existing virtual network "custom-vnet"
Feb 2 20:30:47.763: INFO: deleting an existing route table "node-routetable"
Feb 2 20:30:50.267: INFO: deleting an existing network security group "node-nsg"
... skipping 16 lines ...
[ReportAfterSuite] PASSED [0.012 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
[FAIL] Workload cluster creation Creating clusters on public MEC [OPTIONAL] [It] with 1 control plane nodes and 1 worker node
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144

Ran 8 of 25 Specs in 3488.727 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 17 Skipped

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
... skipping 56 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:284
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:287

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.7.1

--- FAIL: TestE2E (1546.68s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
... skipping 34 lines ...
PASS

Ginkgo ran 1 suite in 1h0m47.475591189s
Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...