PR | willie-yao: Refactor repeated code in E2E test specs to helper functions
Result | ABORTED
Tests | 3 failed / 24 succeeded
Started |
Elapsed | 51m37s
Revision | 46e6d68ccda756eb79bd908d62f7f3e3a7c34a42
Refs | 3003
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sFlatcar\scluster\s\[OPTIONAL\]\sWith\sFlatcar\scontrol\-plane\sand\sworker\snodes$'
[FAILED] Failed to run clusterctl config cluster
Unexpected error:
    <*errors.fundamental | 0xc000931aa0>: {
        msg: "invalid KubernetesVersion. Please use a semantic version number",
        stack: [0x2fe268b, 0x2fe13e5, 0x2feda38, 0x2ff15ef, 0x364e731, 0x19472db, 0x195b7f8, 0x14db741],
    }
    invalid KubernetesVersion. Please use a semantic version number
occurred
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/clusterctl/client.go:302 @ 02/02/23 19:17:26.678
There were additional failures detected after the initial failure. These are visible in the timeline.
from junit.e2e_suite.1.xml
2023/02/02 19:17:26 failed trying to get namespace (capz-e2e-ev68tt):namespaces "capz-e2e-ev68tt" not found
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 19:17:26.323
INFO: "" started at Thu, 02 Feb 2023 19:17:26 UTC on Ginkgo node 3 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-ev68tt" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:17:26.323
Feb 2 19:17:26.323: INFO: starting to create namespace for hosting the "capz-e2e-ev68tt" test spec
INFO: Creating namespace capz-e2e-ev68tt
INFO: Creating event watcher for namespace "capz-e2e-ev68tt"
Feb 2 19:17:26.570: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 19:17:26.676 (353ms)
> Enter [It] With Flatcar control-plane and worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:293 @ 02/02/23 19:17:26.676
INFO: Cluster name is capz-e2e-ev68tt-flatcar
INFO: Creating the workload cluster with name "capz-e2e-ev68tt-flatcar" using the "flatcar" template (Kubernetes FLATCAR_KUBERNETES_VERSION, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-ev68tt-flatcar --infrastructure (default) --kubernetes-version FLATCAR_KUBERNETES_VERSION --control-plane-machine-count 1 --worker-machine-count 1 --flavor flatcar
[FAILED] Failed to run clusterctl config cluster
Unexpected error:
    <*errors.fundamental | 0xc000931aa0>: {
        msg: "invalid KubernetesVersion. Please use a semantic version number",
        stack: [0x2fe268b, 0x2fe13e5, 0x2feda38, 0x2ff15ef, 0x364e731, 0x19472db, 0x195b7f8, 0x14db741],
    }
    invalid KubernetesVersion. Please use a semantic version number
occurred
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/clusterctl/client.go:302 @ 02/02/23 19:17:26.678
< Exit [It] With Flatcar control-plane and worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:293 @ 02/02/23 19:17:26.678 (2ms)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 19:17:26.678
Feb 2 19:17:26.736: INFO: FAILED!
Feb 2 19:17:26.736: INFO: Cleaning up after "Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes" spec
STEP: Unable to dump workload cluster logs as the cluster is nil - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:154 @ 02/02/23 19:17:26.736
Feb 2 19:17:26.736: INFO: Dumping all the Cluster API resources in the "capz-e2e-ev68tt" namespace
Feb 2 19:17:27.946: INFO: Deleting all clusters in the capz-e2e-ev68tt namespace
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 19:17:27.947
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 @ 02/02/23 19:17:33.617
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.dumpSpecResourcesAndCleanup({0x4356100, 0xc00005c0d0}, {{0x3eaae53, 0x17}, {0x4368670, 0xc0003e6ad0}, {0xc000851500, 0xf}, 0xc000976160, 0xc000122200, ...})
      /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:177 +0x4ad
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func1.2()
      /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:136 +0x2d0
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 19:17:33.617 (6.939s)
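The root cause of this failure is visible in the clusterctl invocation above: the `FLATCAR_KUBERNETES_VERSION` placeholder was never substituted with a real version, so the literal string was passed to `--kubernetes-version` and rejected by clusterctl's semantic-version check. A minimal sketch of such a check (illustrative only; the `semverRe` and `validKubernetesVersion` names are hypothetical, not clusterctl's actual code):

```go
package main

import (
	"fmt"
	"regexp"
)

// semverRe loosely matches the "vMAJOR.MINOR.PATCH" shape that clusterctl
// accepts for --kubernetes-version. Illustrative only; clusterctl's real
// validation lives in the cluster-api codebase.
var semverRe = regexp.MustCompile(`^v?\d+\.\d+\.\d+([-+][0-9A-Za-z.+-]+)?$`)

func validKubernetesVersion(v string) bool {
	return semverRe.MatchString(v)
}

func main() {
	// The unexpanded template placeholder fails the check, reproducing
	// the "invalid KubernetesVersion" error above.
	fmt.Println(validKubernetesVersion("FLATCAR_KUBERNETES_VERSION")) // false
	fmt.Println(validKubernetesVersion("v1.26.0"))                    // true
}
```

Presumably the fix is for the job (or a new helper introduced by this PR) to set `FLATCAR_KUBERNETES_VERSION` in the environment before the template is rendered, so a value like `v1.26.0` reaches clusterctl instead of the placeholder.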
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\sand\smachinepools\s\[OPTIONAL\]\swith\s1\scontrol\splane\snode\sand\s1\smachinepool$'
[FAILED] Timed out after 1.006s.
Timed out waiting for 1 ready replicas for MachinePool capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex-mp-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:91 @ 02/02/23 19:24:06.027
from junit.e2e_suite.1.xml
2023/02/02 19:17:26 failed trying to get namespace (capz-e2e-1nqo6o):namespaces "capz-e2e-1nqo6o" not found
cluster.cluster.x-k8s.io/capz-e2e-1nqo6o-flex created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-1nqo6o-flex created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-1nqo6o-flex-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1nqo6o-flex-control-plane created
machinepool.cluster.x-k8s.io/capz-e2e-1nqo6o-flex-mp-0 created
azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-1nqo6o-flex-mp-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-1nqo6o-flex-mp-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 19:17:26.33
INFO: "" started at Thu, 02 Feb 2023 19:17:26 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-1nqo6o" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:17:26.331
Feb 2 19:17:26.331: INFO: starting to create namespace for hosting the "capz-e2e-1nqo6o" test spec
INFO: Creating namespace capz-e2e-1nqo6o
INFO: Creating event watcher for namespace "capz-e2e-1nqo6o"
Feb 2 19:17:26.583: INFO: Using existing cluster identity secret
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 19:17:26.583 (253ms)
> Enter [It] with 1 control plane node and 1 machinepool - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:501 @ 02/02/23 19:17:26.583
STEP: using user-assigned identity - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:502 @ 02/02/23 19:17:26.583
INFO: Cluster name is capz-e2e-1nqo6o-flex
INFO: Creating the workload cluster with name "capz-e2e-1nqo6o-flex" using the "external-cloud-provider-vmss-flex" template (Kubernetes v1.26.0, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-1nqo6o-flex --infrastructure (default) --kubernetes-version v1.26.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor external-cloud-provider-vmss-flex
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 19:17:32.532
INFO: Waiting for control plane to be initialized
STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 02/02/23 19:19:12.691
Feb 2 19:21:35.249: INFO: getting history for release cloud-provider-azure-oot
Feb 2 19:21:35.309: INFO: Release cloud-provider-azure-oot does not exist, installing it
Feb 2 19:21:37.657: INFO: creating 1 resource(s)
Feb 2 19:21:37.842: INFO: creating 10 resource(s)
Feb 2 19:21:39.169: INFO: Install complete
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 02/02/23 19:21:39.169
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 02/02/23 19:21:39.169
Feb 2 19:21:39.253: INFO: getting history for release projectcalico
Feb 2 19:21:39.312: INFO: Release projectcalico does not exist, installing it
Feb 2 19:21:39.946: INFO: creating 1 resource(s)
Feb 2 19:21:40.030: INFO: creating 1 resource(s)
Feb 2 19:21:40.103: INFO: creating 1 resource(s)
Feb 2 19:21:40.178: INFO: creating 1 resource(s)
Feb 2 19:21:40.255: INFO: creating 1 resource(s)
Feb 2 19:21:40.337: INFO: creating 1 resource(s)
Feb 2 19:21:40.487: INFO: creating 1 resource(s)
Feb 2 19:21:40.584: INFO: creating 1 resource(s)
Feb 2 19:21:40.677: INFO: creating 1 resource(s)
Feb 2 19:21:40.800: INFO: creating 1 resource(s)
Feb 2 19:21:40.883: INFO: creating 1 resource(s)
Feb 2 19:21:40.961: INFO: creating 1 resource(s)
Feb 2 19:21:41.051: INFO: creating 1 resource(s)
Feb 2 19:21:41.125: INFO: creating 1 resource(s)
Feb 2 19:21:41.202: INFO: creating 1 resource(s)
Feb 2 19:21:41.296: INFO: creating 1 resource(s)
Feb 2 19:21:41.392: INFO: creating 1 resource(s)
Feb 2 19:21:41.471: INFO: creating 1 resource(s)
Feb 2 19:21:41.570: INFO: creating 1 resource(s)
Feb 2 19:21:41.741: INFO: creating 1 resource(s)
Feb 2 19:21:42.088: INFO: creating 1 resource(s)
Feb 2 19:21:42.158: INFO: Clearing discovery cache
Feb 2 19:21:42.158: INFO: beginning wait for 21 resources with timeout of 1m0s
Feb 2 19:21:46.059: INFO: creating 1 resource(s)
Feb 2 19:21:46.533: INFO: creating 6 resource(s)
Feb 2 19:21:47.398: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:58 @ 02/02/23 19:21:47.819
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:21:48.061
Feb 2 19:21:48.061: INFO: starting to wait for deployment to become available
Feb 2 19:21:58.177: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.116324911s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:64 @ 02/02/23 19:21:58.177
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:21:58.463
Feb 2 19:21:58.463: INFO: starting to wait for deployment to become available
Feb 2 19:22:58.902: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.438377686s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:22:59.444
Feb 2 19:22:59.444: INFO: starting to wait for deployment to become available
Feb 2 19:22:59.501: INFO: Deployment calico-system/calico-typha is now available, took 57.89005ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:69 @ 02/02/23 19:22:59.502
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:22:59.918
Feb 2 19:22:59.918: INFO: starting to wait for deployment to become available
Feb 2 19:23:20.118: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.199394753s
STEP: Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:55 @ 02/02/23 19:23:20.118
STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:20.408
Feb 2 19:23:20.408: INFO: starting to wait for deployment to become available
Feb 2 19:23:20.466: INFO: Deployment kube-system/cloud-controller-manager is now available, took 57.371415ms
INFO: Waiting for the first control plane machine managed by capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 02/02/23 19:23:20.492
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 02/02/23 19:23:20.498
Feb 2 19:23:20.575: INFO: getting history for release azuredisk-csi-driver-oot
Feb 2 19:23:20.633: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Feb 2 19:23:23.679: INFO: creating 1 resource(s)
Feb 2 19:23:23.850: INFO: creating 18 resource(s)
Feb 2 19:23:24.398: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 02/02/23 19:23:24.398
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:24.637
Feb 2 19:23:24.637: INFO: starting to wait for deployment to become available
Feb 2 19:24:04.932: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.29470995s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 02/02/23 19:24:04.947
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 02/02/23 19:24:04.954
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:79 @ 02/02/23 19:24:05.021
[FAILED] Timed out after 1.006s.
Timed out waiting for 1 ready replicas for MachinePool capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex-mp-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:91 @ 02/02/23 19:24:06.027
< Exit [It] with 1 control plane node and 1 machinepool - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:501 @ 02/02/23 19:24:06.027 (6m39.444s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 19:24:06.027
Feb 2 19:24:06.027: INFO: FAILED!
Feb 2 19:24:06.027: INFO: Cleaning up after "Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool" spec
STEP: Dumping logs from the "capz-e2e-1nqo6o-flex" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:24:06.027
Feb 2 19:24:06.027: INFO: Dumping workload cluster capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex logs
Feb 2 19:24:06.075: INFO: Collecting logs for Linux node capz-e2e-1nqo6o-flex-control-plane-w7b86 in cluster capz-e2e-1nqo6o-flex in namespace capz-e2e-1nqo6o
Feb 2 19:24:14.535: INFO: Collecting boot logs for AzureMachine capz-e2e-1nqo6o-flex-control-plane-w7b86
Feb 2 19:24:15.776: INFO: Dumping workload cluster capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex kube-system pod logs
Feb 2 19:24:16.347: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-fff4bd65c-48xjk, container calico-apiserver
Feb 2 19:24:16.347: INFO: Describing Pod calico-apiserver/calico-apiserver-fff4bd65c-48xjk
Feb 2 19:24:16.463: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-fff4bd65c-5r2b9, container calico-apiserver
Feb 2 19:24:16.463: INFO: Describing Pod calico-apiserver/calico-apiserver-fff4bd65c-5r2b9
Feb 2 19:24:16.581: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-6b7b9c649d-gqlgc, container calico-kube-controllers
Feb 2 19:24:16.581: INFO: Describing Pod calico-system/calico-kube-controllers-6b7b9c649d-gqlgc
Feb 2 19:24:16.701: INFO: Creating log watcher for controller calico-system/calico-node-dwcg4, container calico-node
Feb 2 19:24:16.701: INFO: Describing Pod calico-system/calico-node-dwcg4
Feb 2 19:24:16.772: INFO: Error starting logs stream for pod calico-system/calico-node-dwcg4, container calico-node: container "calico-node" in pod "calico-node-dwcg4" is waiting to start: PodInitializing
Feb 2 19:24:16.821: INFO: Creating log watcher for controller calico-system/calico-node-ghfrt, container calico-node
Feb 2 19:24:16.821: INFO: Describing Pod calico-system/calico-node-ghfrt
Feb 2 19:24:16.970: INFO: Creating log watcher for controller calico-system/calico-typha-6459668b7d-z6pbr, container calico-typha
Feb 2 19:24:16.970: INFO: Describing Pod calico-system/calico-typha-6459668b7d-z6pbr
Feb 2 19:24:17.089: INFO: Creating log watcher for controller calico-system/csi-node-driver-2wg4q, container calico-csi
Feb 2 19:24:17.089: INFO: Creating log watcher for controller calico-system/csi-node-driver-2wg4q, container csi-node-driver-registrar
Feb 2 19:24:17.090: INFO: Describing Pod calico-system/csi-node-driver-2wg4q
Feb 2 19:24:17.490: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-f7946fb6f-gv7wn, container cloud-controller-manager
Feb 2 19:24:17.490: INFO: Describing Pod kube-system/cloud-controller-manager-f7946fb6f-gv7wn
Feb 2 19:24:17.888: INFO: Creating log watcher for controller kube-system/cloud-node-manager-rzhv9, container cloud-node-manager
Feb 2 19:24:17.889: INFO: Describing Pod kube-system/cloud-node-manager-rzhv9
Feb 2 19:24:18.302: INFO: Describing Pod kube-system/cloud-node-manager-x7whc
Feb 2 19:24:18.302: INFO: Creating log watcher for controller kube-system/cloud-node-manager-x7whc, container cloud-node-manager
Feb 2 19:24:18.690: INFO: Describing Pod kube-system/coredns-787d4945fb-dg2d6
Feb 2 19:24:18.690: INFO: Creating log watcher for controller kube-system/coredns-787d4945fb-dg2d6, container coredns
Feb 2 19:24:19.091: INFO: Creating log watcher for controller kube-system/coredns-787d4945fb-hsw77, container coredns
Feb 2 19:24:19.091: INFO: Describing Pod kube-system/coredns-787d4945fb-hsw77
Feb 2 19:24:19.491: INFO: Describing Pod kube-system/csi-azuredisk-controller-b484449d7-w8wff
Feb 2 19:24:19.491: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-w8wff, container csi-snapshotter
Feb 2 19:24:19.492: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-w8wff, container csi-resizer
Feb 2 19:24:19.493: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-w8wff, container liveness-probe
Feb 2 19:24:19.493: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-w8wff, container csi-provisioner
Feb 2 19:24:19.493: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-w8wff, container azuredisk
Feb 2 19:24:19.494: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-w8wff, container csi-attacher
Feb 2 19:24:19.891: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-hl5dp, container node-driver-registrar
Feb 2 19:24:19.891: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-hl5dp, container liveness-probe
Feb 2 19:24:19.891: INFO: Describing Pod kube-system/csi-azuredisk-node-hl5dp
Feb 2 19:24:19.892: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-hl5dp, container azuredisk
Feb 2 19:24:20.291: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pgpmt, container node-driver-registrar
Feb 2 19:24:20.291: INFO: Describing Pod kube-system/csi-azuredisk-node-pgpmt
Feb 2 19:24:20.291: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pgpmt, container azuredisk
Feb 2 19:24:20.291: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pgpmt, container liveness-probe
Feb 2 19:24:20.380: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pgpmt, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-pgpmt" is waiting to start: ContainerCreating
Feb 2 19:24:20.381: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pgpmt, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-pgpmt" is waiting to start: ContainerCreating
Feb 2 19:24:20.381: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pgpmt, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-pgpmt" is waiting to start: ContainerCreating
Feb 2 19:24:20.689: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-1nqo6o-flex-control-plane-w7b86, container etcd
Feb 2 19:24:20.689: INFO: Describing Pod kube-system/etcd-capz-e2e-1nqo6o-flex-control-plane-w7b86
Feb 2 19:24:21.089: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1nqo6o-flex-control-plane-w7b86, container kube-apiserver
Feb 2 19:24:21.089: INFO: Describing Pod kube-system/kube-apiserver-capz-e2e-1nqo6o-flex-control-plane-w7b86
Feb 2 19:24:21.489: INFO: Describing Pod kube-system/kube-controller-manager-capz-e2e-1nqo6o-flex-control-plane-w7b86
Feb 2 19:24:21.489: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1nqo6o-flex-control-plane-w7b86, container kube-controller-manager
Feb 2 19:24:21.889: INFO: Describing Pod kube-system/kube-proxy-rxjgd
Feb 2 19:24:21.889: INFO: Creating log watcher for controller kube-system/kube-proxy-rxjgd, container kube-proxy
Feb 2 19:24:22.290: INFO: Creating log watcher for controller kube-system/kube-proxy-vvhn7, container kube-proxy
Feb 2 19:24:22.290: INFO: Describing Pod kube-system/kube-proxy-vvhn7
Feb 2 19:24:22.689: INFO: Describing Pod kube-system/kube-scheduler-capz-e2e-1nqo6o-flex-control-plane-w7b86
Feb 2 19:24:22.689: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1nqo6o-flex-control-plane-w7b86, container kube-scheduler
Feb 2 19:24:23.089: INFO: Fetching kube-system pod logs took 7.312309776s
Feb 2 19:24:23.089: INFO: Dumping workload cluster capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex Azure activity log
Feb 2 19:24:23.089: INFO: Creating log watcher for controller tigera-operator/tigera-operator-54b47459dd-92mjs, container tigera-operator
Feb 2 19:24:23.089: INFO: Describing Pod tigera-operator/tigera-operator-54b47459dd-92mjs
Feb 2 19:24:25.432: INFO: Fetching activity logs took 2.343901823s
Feb 2 19:24:25.432: INFO: Dumping all the Cluster API resources in the "capz-e2e-1nqo6o" namespace
Feb 2 19:24:25.768: INFO: Deleting all clusters in the capz-e2e-1nqo6o namespace
STEP: Deleting cluster capz-e2e-1nqo6o-flex - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:24:25.79
INFO: Waiting for the Cluster capz-e2e-1nqo6o/capz-e2e-1nqo6o-flex to be deleted
STEP: Waiting for cluster capz-e2e-1nqo6o-flex to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:24:25.805
Feb 2 19:28:55.958: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1nqo6o
Feb 2 19:28:55.980: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 19:28:56.599
INFO: "with 1 control plane node and 1 machinepool" started at Thu, 02 Feb 2023 19:29:06 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 19:29:06.694 (5m0.667s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\s\[OPTIONAL\]\sCreates\sa\spublic\smanagement\scluster\sin\sa\scustom\svnet$'
[FAILED] Timed out after 1800.001s.
Timed out waiting for Cluster capz-e2e-itge2h/capz-e2e-qt4b58-private to provision
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144 @ 02/02/23 19:55:08.198
There were additional failures detected after the initial failure. These are visible in the timeline.
from junit.e2e_suite.1.xml
2023/02/02 19:17:26 failed trying to get namespace (capz-e2e-itge2h):namespaces "capz-e2e-itge2h" not found
cluster.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinehealthcheck.cluster.x-k8s.io/capz-e2e-itge2h-public-custom-vnet-mhc-0 created
cluster.cluster.x-k8s.io/capz-e2e-qt4b58-private created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-qt4b58-private created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-qt4b58-private-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-qt4b58-private-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-qt4b58-private-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-qt4b58-private-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-qt4b58-private-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-user-assigned created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-qt4b58-private-calico created
configmap/cni-capz-e2e-qt4b58-private-calico created
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 19:17:26.323
INFO: "" started at Thu, 02 Feb 2023 19:17:26 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-itge2h" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:17:26.323
Feb 2 19:17:26.323: INFO: starting to create namespace for hosting the "capz-e2e-itge2h" test spec
INFO: Creating namespace capz-e2e-itge2h
INFO: Creating event watcher for namespace "capz-e2e-itge2h"
Feb 2 19:17:26.552: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 19:17:26.672 (350ms)
> Enter [It] Creates a public management cluster in a custom vnet - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:154 @ 02/02/23 19:17:26.672
INFO: Cluster name is capz-e2e-itge2h-public-custom-vnet
STEP: Creating a custom virtual network - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156 @ 02/02/23 19:17:26.672
STEP: creating Azure clients with the workload cluster's subscription - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:214 @ 02/02/23 19:17:26.672
STEP: creating a resource group - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:229 @ 02/02/23 19:17:26.673
STEP: creating a network security group - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:240 @ 02/02/23 19:17:28.49
STEP: creating a node security group - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:282 @ 02/02/23 19:17:33.064
STEP: creating a node routetable - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:295 @ 02/02/23 19:17:37.209
STEP: creating a virtual network - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:306 @ 02/02/23 19:17:40.342
END STEP: Creating a custom virtual network - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156 @ 02/02/23 19:17:44.557 (17.885s)
INFO: Creating the workload cluster with name "capz-e2e-itge2h-public-custom-vnet" using the "custom-vnet" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-itge2h-public-custom-vnet --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor custom-vnet
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 19:17:45.977
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 02/02/23 19:18:06.044
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 02/02/23 19:18:06.044
Feb 2 19:20:03.849: INFO: getting history for release projectcalico
Feb 2 19:20:03.908: INFO: Release projectcalico does not exist, installing it
Feb 2 19:20:05.010: INFO: creating 1 resource(s)
Feb 2 19:20:05.128: INFO: creating 1 resource(s)
Feb 2 19:20:05.225: INFO: creating 1 resource(s)
Feb 2 19:20:05.303: INFO: creating 1 resource(s)
Feb 2 19:20:05.394: INFO: creating 1 resource(s)
Feb 2 19:20:05.477: INFO: creating 1 resource(s)
Feb 2 19:20:05.644: INFO: creating 1 resource(s)
Feb 2 19:20:05.761: INFO: creating 1 resource(s)
Feb 2 19:20:05.838: INFO: creating 1 resource(s)
Feb 2 19:20:05.918: INFO: creating 1 resource(s)
Feb 2 19:20:05.995: INFO: creating 1 resource(s)
Feb 2 19:20:06.072: INFO: creating 1 resource(s)
Feb 2 19:20:06.142: INFO: creating 1 resource(s)
Feb 2 19:20:06.221: INFO: creating 1 resource(s)
Feb 2 19:20:06.298: INFO: creating 1 resource(s)
Feb 2 19:20:06.384: INFO: creating 1 resource(s)
Feb 2 19:20:06.495: INFO: creating 1 resource(s)
Feb 2 19:20:06.579: INFO: creating 1 resource(s)
Feb 2 19:20:06.693: INFO: creating 1 resource(s)
Feb 2 19:20:06.878: INFO: creating 1 resource(s)
Feb 2 19:20:07.289: INFO: creating 1 resource(s)
Feb 2 19:20:07.369: INFO: Clearing discovery cache
Feb 2 19:20:07.369: INFO: beginning wait for 21 resources with timeout of 1m0s
Feb 2 19:20:13.031: INFO: creating 1 resource(s)
Feb 2 19:20:14.229: INFO: creating 6 resource(s)
Feb 2 19:20:15.272: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:58 @ 02/02/23 19:20:15.707
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:20:15.954
Feb 2 19:20:15.954: INFO: starting to wait for deployment to become available
Feb 2 19:20:26.071: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.117274843s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:64 @ 02/02/23 19:20:26.071
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:20:26.364
Feb 2 19:20:26.364: INFO: starting to wait for deployment to become available
Feb 2 19:21:26.778: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.414014295s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:21:27.308
Feb 2 19:21:27.308: INFO: starting to wait for deployment
to become available Feb 2 19:21:27.367: INFO: Deployment calico-system/calico-typha is now available, took 59.017629ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:69 @ 02/02/23 19:21:27.367 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:21:27.854 Feb 2 19:21:27.854: INFO: starting to wait for deployment to become available Feb 2 19:21:48.028: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.174276217s INFO: Waiting for the first control plane machine managed by capz-e2e-itge2h/capz-e2e-itge2h-public-custom-vnet-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 02/02/23 19:21:48.051 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 02/02/23 19:21:48.058 Feb 2 19:21:48.141: INFO: getting history for release azuredisk-csi-driver-oot Feb 2 19:21:48.200: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Feb 2 19:21:51.306: INFO: creating 1 resource(s) Feb 2 19:21:51.520: INFO: creating 18 resource(s) Feb 2 19:21:52.067: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 02/02/23 19:21:52.067 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:21:52.32 Feb 2 19:21:52.320: INFO: starting to wait for deployment to become available Feb 2 19:22:32.627: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 
40.30742606s INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-itge2h/capz-e2e-itge2h-public-custom-vnet-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 02/02/23 19:22:32.642 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 02/02/23 19:22:32.648 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 02/02/23 19:22:32.677 STEP: Checking all the machines controlled by capz-e2e-itge2h-public-custom-vnet-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:22:32.688 INFO: Waiting for the machine pools to be provisioned INFO: Calling PostMachinesProvisioned STEP: Waiting for all DaemonSet Pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/daemonsets.go:71 @ 02/02/23 19:22:32.781 STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:22:33.146 STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:22:43.207 STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:22:53.266 STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:03.326 STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:13.386 Feb 2 19:23:13.386: INFO: 2 daemonset calico-system/calico-node pods are running, took 40.303043977s STEP: waiting for 2 daemonset calico-system/csi-node-driver pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:13.444 STEP: waiting for 2 daemonset calico-system/csi-node-driver pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:23.502 Feb 2 19:23:23.502: INFO: 2 daemonset calico-system/csi-node-driver pods are running, took 10.114286629s STEP: waiting for 2 daemonset kube-system/csi-azuredisk-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:23.561 Feb 2 19:23:23.561: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 58.249054ms STEP: daemonset kube-system/csi-azuredisk-node-win has no schedulable nodes, will skip - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:23.624 STEP: waiting for 2 daemonset kube-system/kube-proxy pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:23.687 Feb 2 19:23:23.687: INFO: 2 daemonset kube-system/kube-proxy pods are running, took 62.20831ms STEP: Creating a private cluster from the management cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:185 @ 02/02/23 19:23:23.687 STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:79 @ 02/02/23 19:23:23.687 STEP: Creating a namespace for hosting 
the azure-private-cluster test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:23:23.706 Feb 2 19:23:23.706: INFO: starting to create namespace for hosting the azure-private-cluster test spec INFO: Creating namespace capz-e2e-itge2h INFO: Creating event watcher for namespace "capz-e2e-itge2h" STEP: Initializing the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:94 @ 02/02/23 19:23:23.997 INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure --ipam --runtime-extension --config /logs/artifacts/repository/clusterctl-config.yaml --kubeconfig /tmp/e2e-kubeconfig870787871 INFO: Waiting for provider controllers to be running STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:24:59.79 INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-687b6fd9bc-sx4kn, container manager STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:25:00.084 INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-669bd95bbb-6kj7j, container manager STEP: Waiting for deployment capi-system/capi-controller-manager to be available - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:25:00.376 INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod 
capi-controller-manager-6f7b75f796-r4hdl, container manager STEP: Waiting for deployment capz-system/capz-controller-manager to be available - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 19:25:00.673 INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-6d8cb67cb7-cgm6r, container manager STEP: Ensure public API server is stable before creating private cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:102 @ 02/02/23 19:25:00.972 STEP: Creating a private workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:126 @ 02/02/23 19:25:06.837 INFO: Creating the workload cluster with name "capz-e2e-qt4b58-private" using the "private" template (Kubernetes v1.25.6, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-qt4b58-private --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 3 --worker-machine-count 1 --flavor private INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 19:25:08.197 END STEP: Creating a private cluster from the management cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:185 @ 02/02/23 19:55:08.198 (31m44.511s) [FAILED] Timed out after 1800.001s. 
Timed out waiting for Cluster capz-e2e-itge2h/capz-e2e-qt4b58-private to provision Expected <string>: Provisioning to equal <string>: Provisioned In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:144 @ 02/02/23 19:55:08.198 < Exit [It] Creates a public management cluster in a custom vnet - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:154 @ 02/02/23 19:55:08.198 (37m41.526s) > Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 19:55:08.199 Feb 2 19:55:08.199: INFO: FAILED! Feb 2 19:55:08.199: INFO: Cleaning up after "Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet" spec STEP: Dumping logs from the "capz-e2e-itge2h-public-custom-vnet" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 19:55:08.199 Feb 2 19:55:08.199: INFO: Dumping workload cluster capz-e2e-itge2h/capz-e2e-itge2h-public-custom-vnet logs Feb 2 19:55:08.250: INFO: Collecting logs for Linux node capz-e2e-itge2h-public-custom-vnet-control-plane-qchs7 in cluster capz-e2e-itge2h-public-custom-vnet in namespace capz-e2e-itge2h Feb 2 19:55:26.828: INFO: Collecting boot logs for AzureMachine capz-e2e-itge2h-public-custom-vnet-control-plane-qchs7 STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 19:58:28.049 [FAILED] Failed to get controller-runtime client Unexpected error: <*url.Error | 0xc0010acfc0>: { Op: "Get", URL: "https://127.0.0.1:45113/api?timeout=32s", Err: <*net.OpError | 0xc000d648c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00052f0e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 45113, Zone: "", }, Err: <*os.SyscallError | 0xc00128f0a0>{ Syscall: "connect", Err: 
<syscall.Errno>0x6f, }, }, } Get "https://127.0.0.1:45113/api?timeout=32s": dial tcp 127.0.0.1:45113: connect: connection refused occurred In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_proxy.go:193 @ 02/02/23 19:59:46.093 < Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 19:59:46.093 (4m37.894s)
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node