Recent runs | View in Spyglass
PR | willie-yao: Refactor repeated code in E2E test specs to helper functions
Result | FAILURE
Tests | 4 failed / 23 succeeded
Started |
Elapsed | 57m38s
Revision | 6306d5b5cd9d7a39bc7d7a80c3db459b3bdb3e0b
Refs | 3003
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sFlatcar\scluster\s\[OPTIONAL\]\sWith\sFlatcar\scontrol\-plane\sand\sworker\snodes$'
[FAILED] Failed to run clusterctl config cluster
Unexpected error:
    <*errors.fundamental | 0xc000a3a018>: {
        msg: "invalid KubernetesVersion. Please use a semantic version number",
        stack: [0x2fe268b, 0x2fe13e5, 0x2feda38, 0x2ff15ef, 0x364e731, 0x19472db, 0x195b7f8, 0x14db741],
    }
    invalid KubernetesVersion. Please use a semantic version number
occurred
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/clusterctl/client.go:302 @ 02/02/23 20:06:11.382
There were additional failures detected after the initial failure. These are visible in the timeline.

from junit.e2e_suite.1.xml
2023/02/02 20:06:11 failed trying to get namespace (capz-e2e-cbbtow): namespaces "capz-e2e-cbbtow" not found
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.209
INFO: "" started at Thu, 02 Feb 2023 20:06:11 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-cbbtow" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:06:11.209
Feb 2 20:06:11.209: INFO: starting to create namespace for hosting the "capz-e2e-cbbtow" test spec
INFO: Creating namespace capz-e2e-cbbtow
INFO: Creating event watcher for namespace "capz-e2e-cbbtow"
Feb 2 20:06:11.329: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.38 (171ms)
> Enter [It] With Flatcar control-plane and worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:293 @ 02/02/23 20:06:11.38
INFO: Cluster name is capz-e2e-cbbtow-flatcar
INFO: Creating the workload cluster with name "capz-e2e-cbbtow-flatcar" using the "flatcar" template (Kubernetes FLATCAR_KUBERNETES_VERSION, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-cbbtow-flatcar --infrastructure (default) --kubernetes-version FLATCAR_KUBERNETES_VERSION --control-plane-machine-count 1 --worker-machine-count 1 --flavor flatcar
[FAILED] Failed to run clusterctl config cluster
Unexpected error:
    <*errors.fundamental | 0xc000a3a018>: {
        msg: "invalid KubernetesVersion. Please use a semantic version number",
        stack: [0x2fe268b, 0x2fe13e5, 0x2feda38, 0x2ff15ef, 0x364e731, 0x19472db, 0x195b7f8, 0x14db741],
    }
    invalid KubernetesVersion. Please use a semantic version number
occurred
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/clusterctl/client.go:302 @ 02/02/23 20:06:11.382
< Exit [It] With Flatcar control-plane and worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:293 @ 02/02/23 20:06:11.382 (1ms)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:06:11.382
Feb 2 20:06:11.428: INFO: FAILED!
Feb 2 20:06:11.428: INFO: Cleaning up after "Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes" spec
STEP: Unable to dump workload cluster logs as the cluster is nil - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:154 @ 02/02/23 20:06:11.428
Feb 2 20:06:11.428: INFO: Dumping all the Cluster API resources in the "capz-e2e-cbbtow" namespace
Feb 2 20:06:11.939: INFO: Deleting all clusters in the capz-e2e-cbbtow namespace
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 20:06:11.939
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 @ 02/02/23 20:06:15.491
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.dumpSpecResourcesAndCleanup({0x43560e0, 0xc0001b0008}, {{0x3eaae3d, 0x17}, {0x4368650, 0xc00028f070}, {0xc0003d9500, 0xf}, 0xc000d74000, 0xc0000ba740, ...})
      /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:177 +0x4ad
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func1.2()
      /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:136 +0x2d0
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:06:15.491 (4.11s)
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\sand\smachinepools\s\[OPTIONAL\]\swith\s1\scontrol\splane\snode\sand\s1\smachinepool$'
[FAILED] Timed out after 1800.000s.
Timed out waiting for 1 ready replicas for MachinePool capz-e2e-33rt4c/capz-e2e-33rt4c-flex-mp-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:91 @ 02/02/23 20:42:43.39

from junit.e2e_suite.1.xml
2023/02/02 20:06:11 failed trying to get namespace (capz-e2e-33rt4c):namespaces "capz-e2e-33rt4c" not found cluster.cluster.x-k8s.io/capz-e2e-33rt4c-flex created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-33rt4c-flex created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-33rt4c-flex-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-33rt4c-flex-control-plane created machinepool.cluster.x-k8s.io/capz-e2e-33rt4c-flex-mp-0 created azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-33rt4c-flex-mp-0 created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-33rt4c-flex-mp-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created > Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.21 INFO: "" started at Thu, 02 Feb 2023 20:06:11 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating namespace "capz-e2e-33rt4c" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:06:11.21 Feb 2 20:06:11.210: INFO: starting to create namespace for hosting the "capz-e2e-33rt4c" test spec INFO: Creating namespace capz-e2e-33rt4c INFO: Creating event watcher for namespace "capz-e2e-33rt4c" Feb 2 20:06:11.309: INFO: Creating cluster identity secret "cluster-identity-secret" < Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.378 (168ms) > Enter [It] with 1 control plane node and 1 machinepool - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:501 @ 02/02/23 20:06:11.378 STEP: using user-assigned identity - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:502 @ 02/02/23 20:06:11.378 INFO: Cluster name is 
capz-e2e-33rt4c-flex INFO: Creating the workload cluster with name "capz-e2e-33rt4c-flex" using the "external-cloud-provider-vmss-flex" template (Kubernetes v1.26.0, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-33rt4c-flex --infrastructure (default) --kubernetes-version v1.26.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor external-cloud-provider-vmss-flex INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 20:06:15.292 INFO: Waiting for control plane to be initialized STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 02/02/23 20:08:05.452 Feb 2 20:10:17.660: INFO: getting history for release cloud-provider-azure-oot Feb 2 20:10:17.693: INFO: Release cloud-provider-azure-oot does not exist, installing it Feb 2 20:10:19.338: INFO: creating 1 resource(s) Feb 2 20:10:19.481: INFO: creating 10 resource(s) Feb 2 20:10:19.800: INFO: Install complete STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 02/02/23 20:10:19.8 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 02/02/23 20:10:19.8 Feb 2 20:10:19.854: INFO: getting history for release projectcalico Feb 2 20:10:19.887: INFO: Release projectcalico does not exist, installing it Feb 2 20:10:21.137: INFO: creating 1 resource(s) Feb 2 20:10:21.193: INFO: creating 1 resource(s) Feb 2 20:10:21.243: INFO: creating 1 resource(s) Feb 2 20:10:21.305: INFO: creating 1 resource(s) Feb 2 20:10:21.372: INFO: creating 1 resource(s) Feb 2 
20:10:21.428: INFO: creating 1 resource(s) Feb 2 20:10:21.525: INFO: creating 1 resource(s) Feb 2 20:10:21.593: INFO: creating 1 resource(s) Feb 2 20:10:21.644: INFO: creating 1 resource(s) Feb 2 20:10:21.700: INFO: creating 1 resource(s) Feb 2 20:10:21.747: INFO: creating 1 resource(s) Feb 2 20:10:21.792: INFO: creating 1 resource(s) Feb 2 20:10:21.838: INFO: creating 1 resource(s) Feb 2 20:10:21.882: INFO: creating 1 resource(s) Feb 2 20:10:21.941: INFO: creating 1 resource(s) Feb 2 20:10:22.014: INFO: creating 1 resource(s) Feb 2 20:10:22.072: INFO: creating 1 resource(s) Feb 2 20:10:22.124: INFO: creating 1 resource(s) Feb 2 20:10:22.186: INFO: creating 1 resource(s) Feb 2 20:10:22.344: INFO: creating 1 resource(s) Feb 2 20:10:22.585: INFO: creating 1 resource(s) Feb 2 20:10:22.649: INFO: Clearing discovery cache Feb 2 20:10:22.649: INFO: beginning wait for 21 resources with timeout of 1m0s Feb 2 20:10:25.494: INFO: creating 1 resource(s) Feb 2 20:10:25.954: INFO: creating 6 resource(s) Feb 2 20:10:26.559: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:58 @ 02/02/23 20:10:26.82 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:10:26.956 Feb 2 20:10:26.956: INFO: starting to wait for deployment to become available Feb 2 20:10:37.022: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.066251278s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:64 @ 02/02/23 20:10:37.022 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:10:37.193 Feb 2 20:10:37.193: INFO: starting to wait for deployment to become available Feb 2 
20:11:28.322: INFO: Deployment calico-system/calico-kube-controllers is now available, took 51.129250565s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:11:28.779 Feb 2 20:11:28.779: INFO: starting to wait for deployment to become available Feb 2 20:11:28.812: INFO: Deployment calico-system/calico-typha is now available, took 32.562234ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:69 @ 02/02/23 20:11:28.812 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:11:29.062 Feb 2 20:11:29.062: INFO: starting to wait for deployment to become available Feb 2 20:11:49.168: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.106519359s STEP: Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:55 @ 02/02/23 20:11:49.168 STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:11:49.451 Feb 2 20:11:49.451: INFO: starting to wait for deployment to become available Feb 2 20:11:49.484: INFO: Deployment kube-system/cloud-controller-manager is now available, took 33.054208ms INFO: Waiting for the first control plane machine managed by capz-e2e-33rt4c/capz-e2e-33rt4c-flex-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 02/02/23 20:11:49.516 STEP: Installing azure-disk CSI driver components via helm - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 02/02/23 20:11:49.521 Feb 2 20:11:49.574: INFO: getting history for release azuredisk-csi-driver-oot Feb 2 20:11:49.613: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Feb 2 20:11:52.201: INFO: creating 1 resource(s) Feb 2 20:11:52.325: INFO: creating 18 resource(s) Feb 2 20:11:52.671: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 02/02/23 20:11:52.671 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:11:52.809 Feb 2 20:11:52.809: INFO: starting to wait for deployment to become available Feb 2 20:12:43.308: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 50.498752154s INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-33rt4c/capz-e2e-33rt4c-flex-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 02/02/23 20:12:43.321 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 02/02/23 20:12:43.328 INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned STEP: Waiting for the machine pool workload nodes - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:79 @ 02/02/23 20:12:43.389 [FAILED] Timed out after 1800.000s. 
Timed out waiting for 1 ready replicas for MachinePool capz-e2e-33rt4c/capz-e2e-33rt4c-flex-mp-0 Expected <int>: 0 to equal <int>: 1 In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:91 @ 02/02/23 20:42:43.39 < Exit [It] with 1 control plane node and 1 machinepool - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:501 @ 02/02/23 20:42:43.39 (36m32.012s) > Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:42:43.39 Feb 2 20:42:43.390: INFO: FAILED! Feb 2 20:42:43.390: INFO: Cleaning up after "Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool" spec STEP: Dumping logs from the "capz-e2e-33rt4c-flex" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:42:43.39 Feb 2 20:42:43.390: INFO: Dumping workload cluster capz-e2e-33rt4c/capz-e2e-33rt4c-flex logs Feb 2 20:42:43.434: INFO: Collecting logs for Linux node capz-e2e-33rt4c-flex-control-plane-cwg9s in cluster capz-e2e-33rt4c-flex in namespace capz-e2e-33rt4c Feb 2 20:43:00.246: INFO: Collecting boot logs for AzureMachine capz-e2e-33rt4c-flex-control-plane-cwg9s Feb 2 20:43:01.485: INFO: Dumping workload cluster capz-e2e-33rt4c/capz-e2e-33rt4c-flex kube-system pod logs Feb 2 20:43:01.916: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-77d577c9b9-bv2z4, container calico-apiserver Feb 2 20:43:01.917: INFO: Describing Pod calico-apiserver/calico-apiserver-77d577c9b9-bv2z4 Feb 2 20:43:01.980: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-77d577c9b9-ll296, container calico-apiserver Feb 2 20:43:01.980: INFO: Describing Pod calico-apiserver/calico-apiserver-77d577c9b9-ll296 Feb 2 20:43:02.049: INFO: Creating log watcher for 
controller calico-system/calico-kube-controllers-6b7b9c649d-f58ww, container calico-kube-controllers Feb 2 20:43:02.049: INFO: Describing Pod calico-system/calico-kube-controllers-6b7b9c649d-f58ww Feb 2 20:43:02.119: INFO: Creating log watcher for controller calico-system/calico-node-hs2b4, container calico-node Feb 2 20:43:02.120: INFO: Describing Pod calico-system/calico-node-hs2b4 Feb 2 20:43:02.186: INFO: Creating log watcher for controller calico-system/calico-typha-6c66cb7f5-79nb5, container calico-typha Feb 2 20:43:02.187: INFO: Describing Pod calico-system/calico-typha-6c66cb7f5-79nb5 Feb 2 20:43:02.284: INFO: Creating log watcher for controller calico-system/csi-node-driver-zjjt5, container csi-node-driver-registrar Feb 2 20:43:02.284: INFO: Describing Pod calico-system/csi-node-driver-zjjt5 Feb 2 20:43:02.284: INFO: Creating log watcher for controller calico-system/csi-node-driver-zjjt5, container calico-csi Feb 2 20:43:02.684: INFO: Describing Pod kube-system/cloud-controller-manager-66f8bf6588-t465g Feb 2 20:43:02.684: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-66f8bf6588-t465g, container cloud-controller-manager Feb 2 20:43:03.084: INFO: Describing Pod kube-system/cloud-node-manager-t8dns Feb 2 20:43:03.084: INFO: Creating log watcher for controller kube-system/cloud-node-manager-t8dns, container cloud-node-manager Feb 2 20:43:03.483: INFO: Creating log watcher for controller kube-system/coredns-787d4945fb-knl2j, container coredns Feb 2 20:43:03.483: INFO: Describing Pod kube-system/coredns-787d4945fb-knl2j Feb 2 20:43:03.885: INFO: Creating log watcher for controller kube-system/coredns-787d4945fb-tfd5p, container coredns Feb 2 20:43:03.885: INFO: Describing Pod kube-system/coredns-787d4945fb-tfd5p Feb 2 20:43:04.285: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-2zwqg, container csi-snapshotter Feb 2 20:43:04.285: INFO: Describing Pod 
kube-system/csi-azuredisk-controller-b484449d7-2zwqg Feb 2 20:43:04.285: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-2zwqg, container csi-provisioner Feb 2 20:43:04.285: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-2zwqg, container azuredisk Feb 2 20:43:04.286: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-2zwqg, container csi-attacher Feb 2 20:43:04.287: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-2zwqg, container csi-resizer Feb 2 20:43:04.287: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-b484449d7-2zwqg, container liveness-probe Feb 2 20:43:04.684: INFO: Describing Pod kube-system/csi-azuredisk-node-2wk6m Feb 2 20:43:04.684: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-2wk6m, container node-driver-registrar Feb 2 20:43:04.684: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-2wk6m, container azuredisk Feb 2 20:43:04.684: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-2wk6m, container liveness-probe Feb 2 20:43:05.083: INFO: Describing Pod kube-system/etcd-capz-e2e-33rt4c-flex-control-plane-cwg9s Feb 2 20:43:05.083: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-33rt4c-flex-control-plane-cwg9s, container etcd Feb 2 20:43:05.484: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-33rt4c-flex-control-plane-cwg9s, container kube-apiserver Feb 2 20:43:05.484: INFO: Describing Pod kube-system/kube-apiserver-capz-e2e-33rt4c-flex-control-plane-cwg9s Feb 2 20:43:05.882: INFO: Describing Pod kube-system/kube-controller-manager-capz-e2e-33rt4c-flex-control-plane-cwg9s Feb 2 20:43:05.882: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-33rt4c-flex-control-plane-cwg9s, container kube-controller-manager Feb 
2 20:43:06.282: INFO: Describing Pod kube-system/kube-proxy-p9ngt Feb 2 20:43:06.283: INFO: Creating log watcher for controller kube-system/kube-proxy-p9ngt, container kube-proxy Feb 2 20:43:06.682: INFO: Describing Pod kube-system/kube-scheduler-capz-e2e-33rt4c-flex-control-plane-cwg9s Feb 2 20:43:06.682: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-33rt4c-flex-control-plane-cwg9s, container kube-scheduler Feb 2 20:43:07.082: INFO: Fetching kube-system pod logs took 5.596230451s Feb 2 20:43:07.082: INFO: Dumping workload cluster capz-e2e-33rt4c/capz-e2e-33rt4c-flex Azure activity log Feb 2 20:43:07.082: INFO: Creating log watcher for controller tigera-operator/tigera-operator-54b47459dd-txjwj, container tigera-operator Feb 2 20:43:07.082: INFO: Describing Pod tigera-operator/tigera-operator-54b47459dd-txjwj Feb 2 20:43:12.653: INFO: Fetching activity logs took 5.57131597s Feb 2 20:43:12.653: INFO: Dumping all the Cluster API resources in the "capz-e2e-33rt4c" namespace Feb 2 20:43:12.978: INFO: Deleting all clusters in the capz-e2e-33rt4c namespace STEP: Deleting cluster capz-e2e-33rt4c-flex - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 20:43:12.997 INFO: Waiting for the Cluster capz-e2e-33rt4c/capz-e2e-33rt4c-flex to be deleted STEP: Waiting for cluster capz-e2e-33rt4c-flex to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 20:43:13.011 Feb 2 20:47:53.157: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-33rt4c Feb 2 20:47:53.175: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 20:47:53.769 INFO: "with 1 control plane node and 1 machinepool" started 
at Thu, 02 Feb 2023 20:49:12 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:49:12.901 (6m29.511s)
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\s\[OPTIONAL\]\sCreates\sa\spublic\smanagement\scluster\sin\sa\scustom\svnet$'
[FAILED] Timed out after 198.082s.
Expected success, but got an error:
    <*errors.withStack | 0xc002bb1c38>: {
        error: <*errors.withMessage | 0xc0002e4920>{
            cause: <*url.Error | 0xc001e58030>{
                Op: "Get",
                URL: "https://capz-e2e-sgnxkw-public-custom-vnet-9a5377f7.eastus.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc0018f17c0>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: <*net.TCPAddr | 0xc000f219b0>{IP: [20, 84, 1, 64], Port: 6443, Zone: ""},
                    Err: <*poll.DeadlineExceededError | 0x5d0a860>{},
                },
            },
            msg: "Kubernetes cluster unreachable",
        },
        stack: [0x3548e05, 0x35eba7b, 0x3645292, 0x154e085, 0x154d57c, 0x196c15a, 0x196d517, 0x196a50d, 0x3644a49, 0x363454c, 0x3637277, 0x2ff1c90, 0x364ff73, 0x19472db, 0x195b7f8, 0x14db741],
    }
    Kubernetes cluster unreachable: Get "https://capz-e2e-sgnxkw-public-custom-vnet-9a5377f7.eastus.cloudapp.azure.com:6443/version": dial tcp 20.84.1.64:6443: i/o timeout
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 02/02/23 20:10:06.594

from junit.e2e_suite.1.xml
2023/02/02 20:06:11 failed trying to get namespace (capz-e2e-sgnxkw):namespaces "capz-e2e-sgnxkw" not found cluster.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created machinehealthcheck.cluster.x-k8s.io/capz-e2e-sgnxkw-public-custom-vnet-mhc-0 created Failed to get logs for Machine capz-e2e-sgnxkw-public-custom-vnet-md-0-84f6c79854-f55sx, Cluster capz-e2e-sgnxkw/capz-e2e-sgnxkw-public-custom-vnet: [dialing from control plane to target node at capz-e2e-sgnxkw-public-custom-vnet-md-0-zbpv7: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil] > Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.199 INFO: "" started at Thu, 02 Feb 2023 20:06:11 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating namespace "capz-e2e-sgnxkw" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:06:11.199 Feb 2 20:06:11.199: INFO: starting to create namespace for hosting the "capz-e2e-sgnxkw" test spec INFO: Creating namespace capz-e2e-sgnxkw INFO: Creating event watcher for namespace "capz-e2e-sgnxkw" Feb 
2 20:06:11.280: INFO: Creating cluster identity secret "cluster-identity-secret" < Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.359 (160ms) > Enter [It] Creates a public management cluster in a custom vnet - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:154 @ 02/02/23 20:06:11.359 INFO: Cluster name is capz-e2e-sgnxkw-public-custom-vnet STEP: Creating a custom virtual network - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156 @ 02/02/23 20:06:11.359 STEP: creating Azure clients with the workload cluster's subscription - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:214 @ 02/02/23 20:06:11.359 STEP: creating a resource group - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:229 @ 02/02/23 20:06:11.36 STEP: creating a network security group - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:240 @ 02/02/23 20:06:12.645 STEP: creating a node security group - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:282 @ 02/02/23 20:06:16.774 STEP: creating a node routetable - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:295 @ 02/02/23 20:06:20.636 STEP: creating a virtual network - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_privatecluster.go:306 @ 02/02/23 20:06:23.504 END STEP: Creating a custom virtual network - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156 @ 02/02/23 20:06:27.438 (16.079s) INFO: Creating the workload cluster with name "capz-e2e-sgnxkw-public-custom-vnet" using the "custom-vnet" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl 
config cluster capz-e2e-sgnxkw-public-custom-vnet --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor custom-vnet
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 20:06:28.405
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 02/02/23 20:06:48.486
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 02/02/23 20:06:48.486
[FAILED] Timed out after 198.082s.
Expected success, but got an error:
    <*errors.withStack | 0xc002bb1c38>: {
        error: <*errors.withMessage | 0xc0002e4920>{
            cause: <*url.Error | 0xc001e58030>{
                Op: "Get",
                URL: "https://capz-e2e-sgnxkw-public-custom-vnet-9a5377f7.eastus.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc0018f17c0>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: <*net.TCPAddr | 0xc000f219b0>{IP: [20, 84, 1, 64], Port: 6443, Zone: ""},
                    Err: <*poll.DeadlineExceededError | 0x5d0a860>{},
                },
            },
            msg: "Kubernetes cluster unreachable",
        },
        stack: [0x3548e05, 0x35eba7b, 0x3645292, 0x154e085, 0x154d57c, 0x196c15a, 0x196d517, 0x196a50d, 0x3644a49, 0x363454c, 0x3637277, 0x2ff1c90, 0x364ff73, 0x19472db, 0x195b7f8, 0x14db741],
    }
    Kubernetes cluster unreachable: Get "https://capz-e2e-sgnxkw-public-custom-vnet-9a5377f7.eastus.cloudapp.azure.com:6443/version": dial tcp 20.84.1.64:6443: i/o timeout
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 02/02/23 20:10:06.594
< Exit [It] Creates a public management cluster in a custom vnet - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:154 @ 02/02/23 20:10:06.594 (3m55.235s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:10:06.594
Feb 2 20:10:06.594: INFO: FAILED!
Feb 2 20:10:06.594: INFO: Cleaning up after "Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet" spec
STEP: Dumping logs from the "capz-e2e-sgnxkw-public-custom-vnet" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:10:06.594
Feb 2 20:10:06.594: INFO: Dumping workload cluster capz-e2e-sgnxkw/capz-e2e-sgnxkw-public-custom-vnet logs
Feb 2 20:10:06.636: INFO: Collecting logs for Linux node capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl in cluster capz-e2e-sgnxkw-public-custom-vnet in namespace capz-e2e-sgnxkw
Feb 2 20:10:11.878: INFO: Collecting boot logs for AzureMachine capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl
Feb 2 20:10:12.773: INFO: Collecting logs for Linux node capz-e2e-sgnxkw-public-custom-vnet-md-0-zbpv7 in cluster capz-e2e-sgnxkw-public-custom-vnet in namespace capz-e2e-sgnxkw
Feb 2 20:11:14.948: INFO: Collecting boot logs for AzureMachine capz-e2e-sgnxkw-public-custom-vnet-md-0-zbpv7
Feb 2 20:11:14.968: INFO: Dumping workload cluster capz-e2e-sgnxkw/capz-e2e-sgnxkw-public-custom-vnet kube-system pod logs
Feb 2 20:11:15.378: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-bctzw, container coredns
Feb 2 20:11:15.379: INFO: Describing Pod kube-system/coredns-565d847f94-bctzw
Feb 2 20:11:15.446: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-bnwlj, container coredns
Feb 2 20:11:15.446: INFO: Describing Pod kube-system/coredns-565d847f94-bnwlj
Feb 2 20:11:15.511: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl, container etcd
Feb 2 20:11:15.511: INFO: Describing Pod kube-system/etcd-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl
Feb 2 20:11:15.578: INFO: Describing Pod kube-system/kube-apiserver-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl
Feb 2 20:11:15.578: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl, container kube-apiserver
Feb 2 20:11:15.644: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl, container kube-controller-manager
Feb 2 20:11:15.644: INFO: Describing Pod kube-system/kube-controller-manager-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl
Feb 2 20:11:15.748: INFO: Describing Pod kube-system/kube-proxy-687jn
Feb 2 20:11:15.749: INFO: Creating log watcher for controller kube-system/kube-proxy-687jn, container kube-proxy
Feb 2 20:11:16.144: INFO: Fetching kube-system pod logs took 1.175348565s
Feb 2 20:11:16.144: INFO: Dumping workload cluster capz-e2e-sgnxkw/capz-e2e-sgnxkw-public-custom-vnet Azure activity log
Feb 2 20:11:16.144: INFO: Describing Pod kube-system/kube-scheduler-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl
Feb 2 20:11:16.144: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-sgnxkw-public-custom-vnet-control-plane-x9wdl, container kube-scheduler
Feb 2 20:11:17.785: INFO: Fetching activity logs took 1.641114901s
Feb 2 20:11:17.785: INFO: Dumping all the Cluster API resources in the "capz-e2e-sgnxkw" namespace
Feb 2 20:11:18.219: INFO: Deleting all clusters in the capz-e2e-sgnxkw namespace
STEP: Deleting cluster capz-e2e-sgnxkw-public-custom-vnet - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 20:11:18.245
INFO: Waiting for the Cluster capz-e2e-sgnxkw/capz-e2e-sgnxkw-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-sgnxkw-public-custom-vnet to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 20:11:18.256
Feb 2 20:14:38.377: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-sgnxkw
Feb 2 20:14:38.400: INFO: Running additional cleanup for the "create-workload-cluster" test spec
Feb 2 20:14:38.400: INFO: deleting an existing virtual network "custom-vnet"
Feb 2 20:14:48.997: INFO: deleting an existing route table "node-routetable"
Feb 2 20:14:51.266: INFO: deleting an existing network security group "node-nsg"
Feb 2 20:15:01.716: INFO: deleting an existing network security group "control-plane-nsg"
Feb 2 20:15:12.021: INFO: verifying the existing resource group "capz-e2e-sgnxkw-public-custom-vnet" is empty
Feb 2 20:15:12.085: INFO: deleting the existing resource group "capz-e2e-sgnxkw-public-custom-vnet"
Feb 2 20:16:28.486: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 20:16:29.008
INFO: "Creates a public management cluster in a custom vnet" started at Thu, 02 Feb 2023 20:16:34 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:16:34.179 (6m27.584s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sclusters\susing\sclusterclass\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\,\sone\slinux\sworker\snode\,\sand\sone\swindows\sworker\snode$'
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-f3s15c/capz-e2e-f3s15c-cc-md-0-4t7xb
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 02/02/23 20:38:05.097
from junit.e2e_suite.1.xml
2023/02/02 20:06:11 failed trying to get namespace (capz-e2e-f3s15c):namespaces "capz-e2e-f3s15c" not found
clusterclass.cluster.x-k8s.io/ci-default created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-win created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
cluster.cluster.x-k8s.io/capz-e2e-f3s15c-cc created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-f3s15c-cc-calico created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
configmap/cni-capz-e2e-f3s15c-cc-calico-windows created
configmap/csi-proxy-addon created
Failed to get logs for Machine capz-e2e-f3s15c-cc-md-0-4t7xb-84476d8b5b-6fnmk, Cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc: dialing public load balancer at capz-e2e-f3s15c-cc-a5fa3d41.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-f3s15c-cc-md-win-cwqvd-6ff8f57fff-bk6pb, Cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc: [dialing public load balancer at capz-e2e-f3s15c-cc-a5fa3d41.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain, Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
Failed to get logs for Machine capz-e2e-f3s15c-cc-t97lp-cftzw, Cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc: dialing public load balancer at capz-e2e-f3s15c-cc-a5fa3d41.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.228
INFO: "" started at Thu, 02 Feb 2023 20:06:11 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-f3s15c" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:06:11.228
Feb 2 20:06:11.228: INFO: starting to create namespace for hosting the "capz-e2e-f3s15c" test spec
INFO: Creating namespace capz-e2e-f3s15c
INFO: Creating event watcher for namespace "capz-e2e-f3s15c"
Feb 2 20:06:11.354: INFO: Using existing cluster identity secret
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:54 @ 02/02/23 20:06:11.354 (127ms)
> Enter [It] with a single control plane node, one linux worker node, and one windows worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:792 @ 02/02/23 20:06:11.354
INFO: Cluster name is capz-e2e-f3s15c-cc
INFO: Creating the workload cluster with name "capz-e2e-f3s15c-cc" using the "topology" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-f3s15c-cc --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor topology
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 02/02/23 20:06:16.861
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:49 @ 02/02/23 20:08:06.982
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:102 @ 02/02/23 20:08:06.982
Feb 2 20:10:52.170: INFO: getting history for release projectcalico
Feb 2 20:10:52.203: INFO: Release projectcalico does not exist, installing it
Feb 2 20:10:52.899: INFO: creating 1 resource(s)
Feb 2 20:10:52.959: INFO: creating 1 resource(s)
Feb 2 20:10:53.015: INFO: creating 1 resource(s)
Feb 2 20:10:53.066: INFO: creating 1 resource(s)
Feb 2 20:10:53.116: INFO: creating 1 resource(s)
Feb 2 20:10:53.168: INFO: creating 1 resource(s)
Feb 2 20:10:53.296: INFO: creating 1 resource(s)
Feb 2 20:10:53.361: INFO: creating 1 resource(s)
Feb 2 20:10:53.402: INFO: creating 1 resource(s)
Feb 2 20:10:53.448: INFO: creating 1 resource(s)
Feb 2 20:10:53.492: INFO: creating 1 resource(s)
Feb 2 20:10:53.537: INFO: creating 1 resource(s)
Feb 2 20:10:53.580: INFO: creating 1 resource(s)
Feb 2 20:10:53.625: INFO: creating 1 resource(s)
Feb 2 20:10:53.670: INFO: creating 1 resource(s)
Feb 2 20:10:53.722: INFO: creating 1 resource(s)
Feb 2 20:10:53.813: INFO: creating 1 resource(s)
Feb 2 20:10:53.868: INFO: creating 1 resource(s)
Feb 2 20:10:53.933: INFO: creating 1 resource(s)
Feb 2 20:10:54.054: INFO: creating 1 resource(s)
Feb 2 20:10:54.319: INFO: creating 1 resource(s)
Feb 2 20:10:54.381: INFO: Clearing discovery cache
Feb 2 20:10:54.381: INFO: beginning wait for 21 resources with timeout of 1m0s
Feb 2 20:10:56.943: INFO: creating 1 resource(s)
Feb 2 20:10:57.310: INFO: creating 6 resource(s)
Feb 2 20:10:57.847: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:58 @ 02/02/23 20:10:58.084
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:10:58.216
Feb 2 20:10:58.216: INFO: starting to wait for deployment to become available
Feb 2 20:11:09.301: INFO: Deployment tigera-operator/tigera-operator is now available, took 11.085114829s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:64 @ 02/02/23 20:11:09.301
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:11:09.471
Feb 2 20:11:09.471: INFO: starting to wait for deployment to become available
Feb 2 20:12:00.852: INFO: Deployment calico-system/calico-kube-controllers is now available, took 51.380745039s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:12:01.01
Feb 2 20:12:01.010: INFO: starting to wait for deployment to become available
Feb 2 20:12:01.042: INFO: Deployment calico-system/calico-typha is now available, took 31.849119ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:69 @ 02/02/23 20:12:01.042
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:12:11.232
Feb 2 20:12:11.232: INFO: starting to wait for deployment to become available
Feb 2 20:12:21.296: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.063947617s
INFO: Waiting for the first control plane machine managed by capz-e2e-f3s15c/capz-e2e-f3s15c-cc-t97lp to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 02/02/23 20:12:21.315
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 02/02/23 20:12:21.321
Feb 2 20:12:21.375: INFO: getting history for release azuredisk-csi-driver-oot
Feb 2 20:12:21.409: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Feb 2 20:12:23.522: INFO: creating 1 resource(s)
Feb 2 20:12:23.651: INFO: creating 18 resource(s)
Feb 2 20:12:23.968: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 02/02/23 20:12:23.968
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:12:24.104
Feb 2 20:12:24.104: INFO: starting to wait for deployment to become available
Feb 2 20:13:05.052: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.947781358s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-f3s15c/capz-e2e-f3s15c-cc-t97lp to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 02/02/23 20:13:05.066
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 02/02/23 20:13:05.072
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 02/02/23 20:13:05.096
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-f3s15c/capz-e2e-f3s15c-cc-md-0-4t7xb
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 02/02/23 20:38:05.097
< Exit [It] with a single control plane node, one linux worker node, and one windows worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:792 @ 02/02/23 20:38:05.097 (31m53.743s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:38:05.097
Feb 2 20:38:05.097: INFO: FAILED!
Feb 2 20:38:05.097: INFO: Cleaning up after "Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node" spec
STEP: Dumping logs from the "capz-e2e-f3s15c-cc" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:94 @ 02/02/23 20:38:05.097
Feb 2 20:38:05.097: INFO: Dumping workload cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc logs
Feb 2 20:38:05.145: INFO: Collecting logs for Linux node capz-e2e-f3s15c-cc-md-0-infra-2pl8v-9lcwz in cluster capz-e2e-f3s15c-cc in namespace capz-e2e-f3s15c
Feb 2 20:39:06.088: INFO: Collecting boot logs for AzureMachine capz-e2e-f3s15c-cc-md-0-infra-2pl8v-9lcwz
Feb 2 20:39:07.394: INFO: Unable to collect logs as node doesn't have addresses
Feb 2 20:39:07.394: INFO: Collecting logs for Windows node capz-e2e-f3s15c-cc-md-win-infra-wdl8k-kw7wr in cluster capz-e2e-f3s15c-cc in namespace capz-e2e-f3s15c
Feb 2 20:43:10.869: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-f3s15c-cc-md-win-infra-wdl8k-kw7wr to /logs/artifacts/clusters/capz-e2e-f3s15c-cc/machines/capz-e2e-f3s15c-cc-md-win-cwqvd-6ff8f57fff-bk6pb/crashdumps.tar
Feb 2 20:43:11.181: INFO: Collecting boot logs for AzureMachine capz-e2e-f3s15c-cc-md-win-infra-wdl8k-kw7wr
Feb 2 20:43:11.204: INFO: Collecting logs for Linux node capz-e2e-f3s15c-cc-control-plane-s789h-lvzts in cluster capz-e2e-f3s15c-cc in namespace capz-e2e-f3s15c
Feb 2 20:44:12.183: INFO: Collecting boot logs for AzureMachine capz-e2e-f3s15c-cc-control-plane-s789h-lvzts
Feb 2 20:44:13.142: INFO: Dumping workload cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc kube-system pod logs
Feb 2 20:44:13.521: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-97cfd4c98-rpggg, container calico-apiserver
Feb 2 20:44:13.521: INFO: Describing Pod calico-apiserver/calico-apiserver-97cfd4c98-rpggg
Feb 2 20:44:13.578: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-97cfd4c98-s5dg6, container calico-apiserver
Feb 2 20:44:13.578: INFO: Describing Pod calico-apiserver/calico-apiserver-97cfd4c98-s5dg6
Feb 2 20:44:13.636: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-5q69j, container calico-kube-controllers
Feb 2 20:44:13.636: INFO: Describing Pod calico-system/calico-kube-controllers-5f9dc85578-5q69j
Feb 2 20:44:13.696: INFO: Creating log watcher for controller calico-system/calico-node-vl7kg, container calico-node
Feb 2 20:44:13.696: INFO: Describing Pod calico-system/calico-node-vl7kg
Feb 2 20:44:13.768: INFO: Creating log watcher for controller calico-system/calico-typha-5d887d45c-r9c6h, container calico-typha
Feb 2 20:44:13.768: INFO: Describing Pod calico-system/calico-typha-5d887d45c-r9c6h
Feb 2 20:44:13.888: INFO: Creating log watcher for controller calico-system/csi-node-driver-j5k7z, container calico-csi
Feb 2 20:44:13.888: INFO: Creating log watcher for controller calico-system/csi-node-driver-j5k7z, container csi-node-driver-registrar
Feb 2 20:44:13.888: INFO: Describing Pod calico-system/csi-node-driver-j5k7z
Feb 2 20:44:14.289: INFO: Describing Pod kube-system/coredns-565d847f94-8p7kh
Feb 2 20:44:14.289: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-8p7kh, container coredns
Feb 2 20:44:14.688: INFO: Describing Pod kube-system/coredns-565d847f94-dm8dl
Feb 2 20:44:14.688: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-dm8dl, container coredns
Feb 2 20:44:15.089: INFO: Describing Pod kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n
Feb 2 20:44:15.089: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n, container csi-snapshotter
Feb 2 20:44:15.089: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n, container liveness-probe
Feb 2 20:44:15.089: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n, container azuredisk
Feb 2 20:44:15.089: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n, container csi-provisioner
Feb 2 20:44:15.089: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n, container csi-attacher
Feb 2 20:44:15.090: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-9s46n, container csi-resizer
Feb 2 20:44:15.489: INFO: Describing Pod kube-system/csi-azuredisk-node-dr86q
Feb 2 20:44:15.489: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-dr86q, container node-driver-registrar
Feb 2 20:44:15.489: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-dr86q, container liveness-probe
Feb 2 20:44:15.489: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-dr86q, container azuredisk
Feb 2 20:44:15.888: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts, container etcd
Feb 2 20:44:15.888: INFO: Describing Pod kube-system/etcd-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts
Feb 2 20:44:16.288: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts, container kube-apiserver
Feb 2 20:44:16.288: INFO: Describing Pod kube-system/kube-apiserver-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts
Feb 2 20:44:16.688: INFO: Describing Pod kube-system/kube-controller-manager-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts
Feb 2 20:44:16.688: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts, container kube-controller-manager
Feb 2 20:44:17.090: INFO: Describing Pod kube-system/kube-proxy-4lwr4
Feb 2 20:44:17.090: INFO: Creating log watcher for controller kube-system/kube-proxy-4lwr4, container kube-proxy
Feb 2 20:44:17.488: INFO: Describing Pod kube-system/kube-scheduler-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts
Feb 2 20:44:17.488: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f3s15c-cc-control-plane-s789h-lvzts, container kube-scheduler
Feb 2 20:44:17.888: INFO: Fetching kube-system pod logs took 4.7459699s
Feb 2 20:44:17.888: INFO: Dumping workload cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc Azure activity log
Feb 2 20:44:17.888: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-gg7s7, container tigera-operator
Feb 2 20:44:17.888: INFO: Describing Pod tigera-operator/tigera-operator-64db64cb98-gg7s7
Feb 2 20:44:17.918: INFO: Error fetching activity logs for cluster capz-e2e-f3s15c-cc in namespace capz-e2e-f3s15c. Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-f3s15c-cc" not found
Feb 2 20:44:17.918: INFO: Fetching activity logs took 30.85385ms
Feb 2 20:44:17.918: INFO: Dumping all the Cluster API resources in the "capz-e2e-f3s15c" namespace
Feb 2 20:44:18.400: INFO: Deleting all clusters in the capz-e2e-f3s15c namespace
STEP: Deleting cluster capz-e2e-f3s15c-cc - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 20:44:18.422
INFO: Waiting for the Cluster capz-e2e-f3s15c/capz-e2e-f3s15c-cc to be deleted
STEP: Waiting for cluster capz-e2e-f3s15c-cc to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 02/02/23 20:44:18.439
Feb 2 20:49:58.681: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f3s15c
Feb 2 20:49:58.696: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:218 @ 02/02/23 20:49:59.27
INFO: "with a single control plane node, one linux worker node, and one windows worker node" started at Thu, 02 Feb 2023 20:51:19 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 02/02/23 20:51:19.625 (13m14.528s)
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node