Recent runs | View in Spyglass
PR | jackfrancis: ci: external cloud-provider-azure includes Windows
Result | FAILURE
Tests | 3 failed / 23 succeeded
Started |
Elapsed | 1h9m
Revision | b987c1fbcf8e2303b3a4b0c56e2774d7cea89535
Refs | 2865
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
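The `--ginkgo.focus` value is an end-anchored regular expression matched against the full spec string. As a minimal sketch (Python `re`; the spec string is reconstructed from the pattern itself and the spec name that appears in the cleanup log below, so treat it as an assumption about Ginkgo's exact formatting):

```python
import re

# The value passed to --ginkgo.focus in the command above, verbatim.
focus = (r"capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa"
         r"\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle"
         r"\scontrol\splane\snode\sand\s1\snode$")

# Assumed full spec string: suite name, "[It]", then the spec hierarchy text.
spec = ("capz-e2e [It] Workload cluster creation Creating a GPU-enabled "
        "cluster [OPTIONAL] with a single control plane node and 1 node")

# The escaped pattern (\s for spaces, \[ \] for brackets) selects this spec
# and, thanks to the trailing $, nothing with extra text after it.
print(bool(re.search(focus, spec)))  # → True
```

Re-running only this failed spec locally amounts to pasting that command; the regex guards against accidentally focusing sibling specs that share a prefix.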
[FAILED] Expected success, but got an error:
    <*errors.StatusError | 0xc000ee7ea0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
            Status: "Failure",
            Message: "configmaps \"kubeadm-config\" not found",
            Reason: "NotFound",
            Details: {Name: "kubeadm-config", Group: "", Kind: "configmaps", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    configmaps "kubeadm-config" not found
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:915 @ 01/19/23 00:28:46.01
from junit.e2e_suite.1.xml
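The error above is the standard Kubernetes `StatusError` shape: the API server answered the test helper's read of the `kubeadm-config` ConfigMap with HTTP 404 and `Reason: NotFound`, meaning the object simply did not exist yet when helpers.go:915 queried it. A minimal sketch of how such an error is distinguished from other failures (plain-dict stand-in for the `ErrStatus` payload; the check roughly mirrors what client-go's `apierrors.IsNotFound` does, not the actual helper code):

```python
# ErrStatus payload from the failure above, reduced to a plain dict.
status = {
    "Status": "Failure",
    "Message": 'configmaps "kubeadm-config" not found',
    "Reason": "NotFound",
    "Details": {"Name": "kubeadm-config", "Kind": "configmaps"},
    "Code": 404,
}

def is_not_found(err_status: dict) -> bool:
    """A missing object is signalled by Reason "NotFound", with the
    HTTP 404 status code as a fallback when no reason is set."""
    return err_status.get("Reason") == "NotFound" or err_status.get("Code") == 404

print(is_not_found(status))  # → True: the ConfigMap was absent, not forbidden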
2023/01/19 00:24:02 failed trying to get namespace (capz-e2e-j57fai):namespaces "capz-e2e-j57fai" not found cluster.cluster.x-k8s.io/capz-e2e-j57fai-gpu serverside-applied azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-j57fai-gpu serverside-applied kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-j57fai-gpu-control-plane serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-j57fai-gpu-control-plane serverside-applied azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied machinedeployment.cluster.x-k8s.io/capz-e2e-j57fai-gpu-md-0 serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-j57fai-gpu-md-0 serverside-applied kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-j57fai-gpu-md-0 serverside-applied clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied configmap/nvidia-clusterpolicy-crd serverside-applied configmap/nvidia-gpu-operator-components serverside-applied Failed to get logs for Machine capz-e2e-j57fai-gpu-md-0-84ccfcc4f4-qv2n4, Cluster capz-e2e-j57fai/capz-e2e-j57fai-gpu: [dialing from control plane to target node at capz-e2e-j57fai-gpu-md-0-jk9td: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil] > Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/19/23 00:24:02.851 INFO: "" started at Thu, 19 Jan 2023 00:24:02 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating namespace "capz-e2e-j57fai" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:24:02.852 Jan 19 00:24:02.852: INFO: starting to create namespace for hosting the "capz-e2e-j57fai" test spec INFO: Creating namespace capz-e2e-j57fai INFO: Creating 
event watcher for namespace "capz-e2e-j57fai" Jan 19 00:24:03.050: INFO: Creating cluster identity secret "cluster-identity-secret" < Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/19/23 00:24:03.157 (306ms) > Enter [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:440 @ 01/19/23 00:24:03.157 INFO: Cluster name is capz-e2e-j57fai-gpu INFO: Creating the workload cluster with name "capz-e2e-j57fai-gpu" using the "nvidia-gpu" template (Kubernetes v1.24.9, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-j57fai-gpu --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/19/23 00:24:09.239 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/19/23 00:25:59.384 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/19/23 00:25:59.384 Jan 19 00:28:39.602: INFO: getting history for release projectcalico Jan 19 00:28:39.639: INFO: Release projectcalico does not exist, installing it Jan 19 00:28:40.631: INFO: creating 1 resource(s) Jan 19 00:28:40.692: INFO: creating 1 resource(s) Jan 19 00:28:40.743: INFO: creating 1 resource(s) Jan 19 00:28:40.822: INFO: creating 1 resource(s) Jan 19 00:28:40.882: INFO: creating 1 resource(s) Jan 19 00:28:40.950: INFO: creating 1 resource(s) Jan 19 
00:28:41.057: INFO: creating 1 resource(s) Jan 19 00:28:41.150: INFO: creating 1 resource(s) Jan 19 00:28:41.201: INFO: creating 1 resource(s) Jan 19 00:28:41.264: INFO: creating 1 resource(s) Jan 19 00:28:41.311: INFO: creating 1 resource(s) Jan 19 00:28:41.356: INFO: creating 1 resource(s) Jan 19 00:28:41.400: INFO: creating 1 resource(s) Jan 19 00:28:41.446: INFO: creating 1 resource(s) Jan 19 00:28:41.490: INFO: creating 1 resource(s) Jan 19 00:28:41.547: INFO: creating 1 resource(s) Jan 19 00:28:41.613: INFO: creating 1 resource(s) Jan 19 00:28:41.660: INFO: creating 1 resource(s) Jan 19 00:28:41.728: INFO: creating 1 resource(s) Jan 19 00:28:41.847: INFO: creating 1 resource(s) Jan 19 00:28:42.132: INFO: creating 1 resource(s) Jan 19 00:28:42.243: INFO: Clearing discovery cache Jan 19 00:28:42.243: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 19 00:28:44.807: INFO: creating 1 resource(s) Jan 19 00:28:45.279: INFO: creating 6 resource(s) Jan 19 00:28:45.851: INFO: Install complete [FAILED] Expected success, but got an error: <*errors.StatusError | 0xc000ee7ea0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "configmaps \"kubeadm-config\" not found", Reason: "NotFound", Details: { Name: "kubeadm-config", Group: "", Kind: "configmaps", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } configmaps "kubeadm-config" not found In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:915 @ 01/19/23 00:28:46.01 < Exit [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:440 @ 01/19/23 00:28:46.01 (4m42.853s) > Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 01/19/23 00:28:46.01 Jan 19 00:28:46.010: INFO: FAILED! 
Jan 19 00:28:46.010: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec STEP: Dumping logs from the "capz-e2e-j57fai-gpu" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:28:46.01 Jan 19 00:28:46.010: INFO: Dumping workload cluster capz-e2e-j57fai/capz-e2e-j57fai-gpu logs Jan 19 00:28:46.057: INFO: Collecting logs for Linux node capz-e2e-j57fai-gpu-control-plane-6lx2k in cluster capz-e2e-j57fai-gpu in namespace capz-e2e-j57fai Jan 19 00:28:55.587: INFO: Collecting boot logs for AzureMachine capz-e2e-j57fai-gpu-control-plane-6lx2k Jan 19 00:28:56.570: INFO: Collecting logs for Linux node capz-e2e-j57fai-gpu-md-0-jk9td in cluster capz-e2e-j57fai-gpu in namespace capz-e2e-j57fai Jan 19 00:29:59.349: INFO: Collecting boot logs for AzureMachine capz-e2e-j57fai-gpu-md-0-jk9td Jan 19 00:29:59.365: INFO: Dumping workload cluster capz-e2e-j57fai/capz-e2e-j57fai-gpu kube-system pod logs Jan 19 00:29:59.846: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-xpr2l, container calico-kube-controllers Jan 19 00:29:59.846: INFO: Creating log watcher for controller calico-system/calico-node-gs62f, container calico-node Jan 19 00:29:59.847: INFO: Collecting events for Pod calico-system/calico-node-gs62f Jan 19 00:29:59.847: INFO: Creating log watcher for controller calico-system/calico-typha-6cf7bb8684-cssm9, container calico-typha Jan 19 00:29:59.847: INFO: Collecting events for Pod calico-system/calico-typha-6cf7bb8684-cssm9 Jan 19 00:29:59.847: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-xpr2l Jan 19 00:29:59.880: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-79c7dfc46c-sb48g, container gpu-operator Jan 19 00:29:59.880: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-79c7dfc46c-sb48g Jan 19 
00:29:59.880: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-master-58fd98d466-kjlz8, container master Jan 19 00:29:59.880: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-master-58fd98d466-kjlz8 Jan 19 00:29:59.907: INFO: Error starting logs stream for pod calico-system/calico-node-gs62f, container calico-node: container "calico-node" in pod "calico-node-gs62f" is waiting to start: PodInitializing Jan 19 00:29:59.915: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-j57fai-gpu-control-plane-6lx2k, container kube-apiserver Jan 19 00:29:59.915: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-wdtf7, container coredns Jan 19 00:29:59.916: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-j57fai-gpu-control-plane-6lx2k, container etcd Jan 19 00:29:59.916: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-j57fai-gpu-control-plane-6lx2k Jan 19 00:29:59.916: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-wdtf7 Jan 19 00:29:59.916: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-j57fai-gpu-control-plane-6lx2k Jan 19 00:29:59.916: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-j57fai-gpu-control-plane-6lx2k Jan 19 00:29:59.916: INFO: Collecting events for Pod kube-system/kube-proxy-fhv7w Jan 19 00:29:59.916: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-j57fai-gpu-control-plane-6lx2k, container kube-controller-manager Jan 19 00:29:59.917: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-j57fai-gpu-control-plane-6lx2k, container kube-scheduler Jan 19 00:29:59.917: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-j57fai-gpu-control-plane-6lx2k Jan 19 00:29:59.917: INFO: Creating log watcher for controller kube-system/kube-proxy-fhv7w, container 
kube-proxy Jan 19 00:29:59.917: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-s8jdb, container coredns Jan 19 00:29:59.917: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-s8jdb Jan 19 00:29:59.954: INFO: Fetching kube-system pod logs took 588.834232ms Jan 19 00:29:59.954: INFO: Dumping workload cluster capz-e2e-j57fai/capz-e2e-j57fai-gpu Azure activity log Jan 19 00:29:59.954: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-7mv5m, container tigera-operator Jan 19 00:29:59.955: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-7mv5m Jan 19 00:30:01.757: INFO: Fetching activity logs took 1.802807572s Jan 19 00:30:01.757: INFO: Dumping all the Cluster API resources in the "capz-e2e-j57fai" namespace Jan 19 00:30:02.139: INFO: Deleting all clusters in the capz-e2e-j57fai namespace STEP: Deleting cluster capz-e2e-j57fai-gpu - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 00:30:02.169 INFO: Waiting for the Cluster capz-e2e-j57fai/capz-e2e-j57fai-gpu to be deleted STEP: Waiting for cluster capz-e2e-j57fai-gpu to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 00:30:02.18 Jan 19 00:34:42.347: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-j57fai Jan 19 00:34:42.366: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/19/23 00:34:42.981 INFO: "with a single control plane node and 1 node" started at Thu, 19 Jan 2023 00:34:48 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Workload cluster creation - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 01/19/23 00:34:48.622 (6m2.612s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\sand\sexternal\sazurediskcsi\sdriver\s\[OPTIONAL\]\swith\sa\s1\scontrol\splane\snodes\sand\s2\sworker\snodes$'
[FAILED] Timed out after 1500.000s.
    Timed out waiting for 2 nodes to be created for MachineDeployment capz-e2e-g1zio0/capz-e2e-g1zio0-oot-md-win
    Expected
        <int>: 0
    to equal
        <int>: 2
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/19/23 01:05:14.226
from junit.e2e_suite.1.xml
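The wait that failed here is a poll-until-count loop: the framework repeatedly counts the nodes created for the MachineDeployment and gives up when the 1500s deadline passes with the count still at 0 instead of 2 — here because the Windows worker machines never provisioned. A minimal sketch of that pattern (hypothetical `count_nodes` stand-in; not the actual cluster-api helper in machinedeployment_helpers.go):

```python
import time

def wait_for_nodes(count_nodes, expected: int, timeout: float,
                   interval: float = 0.01) -> int:
    """Poll count_nodes() until it reports `expected` nodes or `timeout`
    seconds elapse; raise with the last observed count on timeout."""
    got = count_nodes()
    deadline = time.monotonic() + timeout
    while got != expected and time.monotonic() < deadline:
        time.sleep(interval)
        got = count_nodes()
    if got != expected:
        raise TimeoutError(
            f"Timed out waiting for {expected} nodes to be created: "
            f"expected {expected}, got {got}")
    return got

# Hypothetical getter that never provisions nodes, as in the failed run.
try:
    wait_for_nodes(lambda: 0, expected=2, timeout=0.05)
except TimeoutError as e:
    print(e)
```

When triaging this class of failure, the node count staying at exactly 0 (rather than 1 of 2) usually points at a problem common to the whole MachineDeployment rather than one flaky VM.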
2023/01/19 00:24:02 failed trying to get namespace (capz-e2e-g1zio0):namespaces "capz-e2e-g1zio0" not found cluster.cluster.x-k8s.io/capz-e2e-g1zio0-oot created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-g1zio0-oot created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-g1zio0-oot-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-g1zio0-oot-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-g1zio0-oot-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-g1zio0-oot-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-g1zio0-oot-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created machinedeployment.cluster.x-k8s.io/capz-e2e-g1zio0-oot-md-win created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-g1zio0-oot-md-win created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-g1zio0-oot-md-win created clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-capz-e2e-g1zio0-oot created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-g1zio0-oot-calico-windows created configmap/cni-capz-e2e-g1zio0-oot-calico-windows created configmap/csi-proxy-addon created configmap/containerd-logger-capz-e2e-g1zio0-oot created felixconfiguration.crd.projectcalico.org/default configured W0119 01:06:04.418592 37085 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.4/tools/cache/reflector.go:169: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Failed to get logs for Machine capz-e2e-g1zio0-oot-md-win-5db4dc878d-gtss8, Cluster capz-e2e-g1zio0/capz-e2e-g1zio0-oot: [running command "ls 'c:\localdumps' -Recurse": Process exited with status 1, getting a new sftp client connection: ssh: subsystem request failed] Failed to get logs for Machine capz-e2e-g1zio0-oot-md-win-5db4dc878d-kj9r9, 
Cluster capz-e2e-g1zio0/capz-e2e-g1zio0-oot: [running command "ls 'c:\localdumps' -Recurse": Process exited with status 1, getting a new sftp client connection: ssh: subsystem request failed] > Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/19/23 00:24:02.852 INFO: "" started at Thu, 19 Jan 2023 00:24:02 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating namespace "capz-e2e-g1zio0" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:24:02.852 Jan 19 00:24:02.852: INFO: starting to create namespace for hosting the "capz-e2e-g1zio0" test spec INFO: Creating namespace capz-e2e-g1zio0 INFO: Creating event watcher for namespace "capz-e2e-g1zio0" Jan 19 00:24:03.054: INFO: Creating cluster identity secret "cluster-identity-secret" < Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/19/23 00:24:03.163 (311ms) > Enter [It] with a 1 control plane nodes and 2 worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:562 @ 01/19/23 00:24:03.163 STEP: using user-assigned identity - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:563 @ 01/19/23 00:24:03.163 INFO: Cluster name is capz-e2e-g1zio0-oot INFO: Creating the workload cluster with name "capz-e2e-g1zio0-oot" using the "external-cloud-provider" template (Kubernetes v1.24.9, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-g1zio0-oot --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 2 --flavor external-cloud-provider INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to 
be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/19/23 00:24:09.471 INFO: Waiting for control plane to be initialized STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:46 @ 01/19/23 00:26:09.664 Jan 19 00:28:28.295: INFO: getting history for release cloud-provider-azure-oot Jan 19 00:28:28.355: INFO: Release cloud-provider-azure-oot does not exist, installing it Jan 19 00:28:30.648: INFO: creating 1 resource(s) Jan 19 00:28:30.797: INFO: creating 10 resource(s) Jan 19 00:28:31.270: INFO: Install complete STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/19/23 00:28:31.27 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/19/23 00:28:31.27 Jan 19 00:28:31.348: INFO: getting history for release projectcalico Jan 19 00:28:31.408: INFO: Release projectcalico does not exist, installing it Jan 19 00:28:32.082: INFO: creating 1 resource(s) Jan 19 00:28:32.169: INFO: creating 1 resource(s) Jan 19 00:28:32.245: INFO: creating 1 resource(s) Jan 19 00:28:32.321: INFO: creating 1 resource(s) Jan 19 00:28:32.396: INFO: creating 1 resource(s) Jan 19 00:28:32.480: INFO: creating 1 resource(s) Jan 19 00:28:32.640: INFO: creating 1 resource(s) Jan 19 00:28:32.726: INFO: creating 1 resource(s) Jan 19 00:28:32.798: INFO: creating 1 resource(s) Jan 19 00:28:32.869: INFO: creating 1 resource(s) Jan 19 00:28:32.942: INFO: creating 1 resource(s) Jan 19 00:28:33.015: INFO: creating 1 resource(s) Jan 19 00:28:33.092: INFO: creating 1 resource(s) Jan 19 00:28:33.178: INFO: creating 1 resource(s) Jan 19 00:28:33.247: INFO: creating 1 resource(s) Jan 19 00:28:33.327: INFO: creating 1 resource(s) Jan 19 
00:28:33.410: INFO: creating 1 resource(s) Jan 19 00:28:33.486: INFO: creating 1 resource(s) Jan 19 00:28:33.594: INFO: creating 1 resource(s) Jan 19 00:28:33.747: INFO: creating 1 resource(s) Jan 19 00:28:34.325: INFO: creating 1 resource(s) Jan 19 00:28:34.394: INFO: Clearing discovery cache Jan 19 00:28:34.394: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 19 00:28:37.775: INFO: creating 1 resource(s) Jan 19 00:28:38.457: INFO: creating 6 resource(s) Jan 19 00:28:39.407: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/19/23 00:28:39.911 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:28:40.16 Jan 19 00:28:40.160: INFO: starting to wait for deployment to become available Jan 19 00:28:50.280: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.119341093s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/19/23 00:28:51.153 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:28:51.456 Jan 19 00:28:51.456: INFO: starting to wait for deployment to become available Jan 19 00:29:42.557: INFO: Deployment calico-system/calico-kube-controllers is now available, took 51.101741106s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:42.91 Jan 19 00:29:42.910: INFO: starting to wait for deployment to become available Jan 19 00:29:42.970: INFO: Deployment calico-system/calico-typha is now available, took 59.420527ms STEP: Waiting for Ready calico-apiserver deployment pods - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/19/23 00:29:42.97 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:53.327 Jan 19 00:29:53.327: INFO: starting to wait for deployment to become available Jan 19 00:30:13.513: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.185732563s STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/19/23 00:30:13.513 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:30:13.878 Jan 19 00:30:13.878: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 19 00:35:25.825: INFO: 3 daemonset calico-system/calico-node pods are running, took 5m11.947810447s STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/19/23 00:35:25.825 STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:35:26.124 Jan 19 00:35:26.124: INFO: waiting for daemonset calico-system/calico-node-windows to be complete Jan 19 00:35:26.184: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 59.684495ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/19/23 00:35:26.184 STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:35:26.481 Jan 19 00:35:26.481: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete Jan 19 00:35:26.540: INFO: 0 daemonset 
kube-system/kube-proxy-windows pods are running, took 59.002557ms STEP: Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:56 @ 01/19/23 00:35:26.564 STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:35:26.809 Jan 19 00:35:26.809: INFO: starting to wait for deployment to become available Jan 19 00:35:26.868: INFO: Deployment kube-system/cloud-controller-manager is now available, took 58.919173ms STEP: Waiting for Ready cloud-node-manager daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:61 @ 01/19/23 00:35:26.868 STEP: waiting for daemonset kube-system/cloud-node-manager to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:35:27.111 Jan 19 00:35:27.111: INFO: waiting for daemonset kube-system/cloud-node-manager to be complete Jan 19 00:35:27.170: INFO: 3 daemonset kube-system/cloud-node-manager pods are running, took 58.643509ms STEP: waiting for daemonset kube-system/cloud-node-manager-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:35:27.411 Jan 19 00:35:27.411: INFO: waiting for daemonset kube-system/cloud-node-manager-windows to be complete Jan 19 00:35:27.469: INFO: 0 daemonset kube-system/cloud-node-manager-windows pods are running, took 58.485056ms INFO: Waiting for the first control plane machine managed by capz-e2e-g1zio0/capz-e2e-g1zio0-oot-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/19/23 00:35:27.499 STEP: Installing azure-disk CSI driver components via helm - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/19/23 00:35:27.51 Jan 19 00:35:27.594: INFO: getting history for release azuredisk-csi-driver-oot Jan 19 00:35:27.654: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 19 00:35:30.575: INFO: creating 1 resource(s) Jan 19 00:35:30.835: INFO: creating 18 resource(s) Jan 19 00:35:31.353: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/19/23 00:35:31.375 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:35:31.621 Jan 19 00:35:31.621: INFO: starting to wait for deployment to become available Jan 19 00:40:13.412: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 4m41.790178607s STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/19/23 00:40:13.412 STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:40:13.715 Jan 19 00:40:13.715: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete Jan 19 00:40:13.774: INFO: 3 daemonset kube-system/csi-azuredisk-node pods are running, took 59.606568ms STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:40:14.074 Jan 19 00:40:14.074: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete Jan 19 00:40:14.134: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 60.030501ms INFO: Waiting for control plane to be ready INFO: Waiting for 
control plane capz-e2e-g1zio0/capz-e2e-g1zio0-oot-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/19/23 00:40:14.149 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/19/23 00:40:14.157 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/19/23 00:40:14.194 STEP: Checking all the machines controlled by capz-e2e-g1zio0-oot-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 00:40:14.212 STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/19/23 00:40:14.225 [FAILED] Timed out after 1500.000s. Timed out waiting for 2 nodes to be created for MachineDeployment capz-e2e-g1zio0/capz-e2e-g1zio0-oot-md-win Expected <int>: 0 to equal <int>: 2 In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/19/23 01:05:14.226 < Exit [It] with a 1 control plane nodes and 2 worker nodes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:562 @ 01/19/23 01:05:14.226 (41m11.063s) > Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 01/19/23 01:05:14.226 Jan 19 01:05:14.226: INFO: FAILED! 
Jan 19 01:05:14.226: INFO: Cleaning up after "Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes" spec STEP: Dumping logs from the "capz-e2e-g1zio0-oot" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 01:05:14.226 Jan 19 01:05:14.226: INFO: Dumping workload cluster capz-e2e-g1zio0/capz-e2e-g1zio0-oot logs Jan 19 01:05:14.317: INFO: Collecting logs for Linux node capz-e2e-g1zio0-oot-control-plane-9dpvw in cluster capz-e2e-g1zio0-oot in namespace capz-e2e-g1zio0 Jan 19 01:07:10.851: INFO: Collecting boot logs for AzureMachine capz-e2e-g1zio0-oot-control-plane-9dpvw Jan 19 01:07:12.197: INFO: Collecting logs for Linux node capz-e2e-g1zio0-oot-md-0-hh5dn in cluster capz-e2e-g1zio0-oot in namespace capz-e2e-g1zio0 Jan 19 01:07:26.321: INFO: Collecting boot logs for AzureMachine capz-e2e-g1zio0-oot-md-0-hh5dn Jan 19 01:07:26.813: INFO: Collecting logs for Linux node capz-e2e-g1zio0-oot-md-0-bblcn in cluster capz-e2e-g1zio0-oot in namespace capz-e2e-g1zio0 Jan 19 01:07:38.144: INFO: Collecting boot logs for AzureMachine capz-e2e-g1zio0-oot-md-0-bblcn Jan 19 01:07:38.783: INFO: Collecting logs for Windows node capz-e2e-pfmz4 in cluster capz-e2e-g1zio0-oot in namespace capz-e2e-g1zio0 Jan 19 01:10:08.245: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-pfmz4 to /logs/artifacts/clusters/capz-e2e-g1zio0-oot/machines/capz-e2e-g1zio0-oot-md-win-5db4dc878d-gtss8/crashdumps.tar Jan 19 01:10:09.871: INFO: Collecting boot logs for AzureMachine capz-e2e-g1zio0-oot-md-win-pfmz4 Jan 19 01:10:10.964: INFO: Collecting logs for Windows node capz-e2e-skqw4 in cluster capz-e2e-g1zio0-oot in namespace capz-e2e-g1zio0 Jan 19 01:12:46.196: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-skqw4 to 
/logs/artifacts/clusters/capz-e2e-g1zio0-oot/machines/capz-e2e-g1zio0-oot-md-win-5db4dc878d-kj9r9/crashdumps.tar Jan 19 01:12:47.859: INFO: Collecting boot logs for AzureMachine capz-e2e-g1zio0-oot-md-win-skqw4 Jan 19 01:12:49.064: INFO: Dumping workload cluster capz-e2e-g1zio0/capz-e2e-g1zio0-oot kube-system pod logs Jan 19 01:12:49.702: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-648987b59b-7j7tt Jan 19 01:12:49.702: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-648987b59b-7j7tt, container calico-apiserver Jan 19 01:12:49.702: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-648987b59b-pgkhw, container calico-apiserver Jan 19 01:12:49.702: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-648987b59b-pgkhw Jan 19 01:12:49.786: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-zntnb Jan 19 01:12:49.786: INFO: Creating log watcher for controller calico-system/calico-typha-76b484848-lqmzx, container calico-typha Jan 19 01:12:49.787: INFO: Creating log watcher for controller calico-system/csi-node-driver-4xgmx, container csi-node-driver-registrar Jan 19 01:12:49.787: INFO: Collecting events for Pod calico-system/calico-node-qsf5l Jan 19 01:12:49.787: INFO: Creating log watcher for controller calico-system/csi-node-driver-nk5sz, container csi-node-driver-registrar Jan 19 01:12:49.787: INFO: Creating log watcher for controller calico-system/calico-node-fwz4z, container calico-node Jan 19 01:12:49.787: INFO: Creating log watcher for controller calico-system/csi-node-driver-nk5sz, container calico-csi Jan 19 01:12:49.787: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-zntnb, container calico-kube-controllers Jan 19 01:12:49.787: INFO: Collecting events for Pod calico-system/csi-node-driver-nk5sz Jan 19 01:12:49.787: INFO: Creating log watcher for controller calico-system/csi-node-driver-r4sf4, container 
calico-csi Jan 19 01:12:49.788: INFO: Creating log watcher for controller calico-system/calico-node-windows-7hqss, container calico-node-startup Jan 19 01:12:49.788: INFO: Collecting events for Pod calico-system/calico-typha-76b484848-nhtv6 Jan 19 01:12:49.788: INFO: Collecting events for Pod calico-system/calico-node-fwz4z Jan 19 01:12:49.788: INFO: Creating log watcher for controller calico-system/calico-typha-76b484848-vvd87, container calico-typha Jan 19 01:12:49.788: INFO: Creating log watcher for controller calico-system/calico-node-windows-7hqss, container calico-node-felix Jan 19 01:12:49.788: INFO: Collecting events for Pod calico-system/calico-typha-76b484848-vvd87 Jan 19 01:12:49.788: INFO: Creating log watcher for controller calico-system/csi-node-driver-r4sf4, container csi-node-driver-registrar Jan 19 01:12:49.788: INFO: Creating log watcher for controller calico-system/csi-node-driver-4xgmx, container calico-csi Jan 19 01:12:49.788: INFO: Collecting events for Pod calico-system/csi-node-driver-r4sf4 Jan 19 01:12:49.788: INFO: Collecting events for Pod calico-system/csi-node-driver-4xgmx Jan 19 01:12:49.789: INFO: Collecting events for Pod calico-system/calico-typha-76b484848-lqmzx Jan 19 01:12:49.789: INFO: Creating log watcher for controller calico-system/calico-typha-76b484848-nhtv6, container calico-typha Jan 19 01:12:49.792: INFO: Collecting events for Pod calico-system/calico-node-h9q8q Jan 19 01:12:49.792: INFO: Creating log watcher for controller calico-system/calico-node-windows-m2cww, container calico-node-startup Jan 19 01:12:49.792: INFO: Creating log watcher for controller calico-system/calico-node-windows-m2cww, container calico-node-felix Jan 19 01:12:49.792: INFO: Creating log watcher for controller calico-system/calico-node-h9q8q, container calico-node Jan 19 01:12:49.792: INFO: Collecting events for Pod calico-system/calico-node-windows-7hqss Jan 19 01:12:49.792: INFO: Creating log watcher for controller 
calico-system/calico-node-qsf5l, container calico-node Jan 19 01:12:49.792: INFO: Collecting events for Pod calico-system/calico-node-windows-m2cww Jan 19 01:12:49.872: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-7f87dc989b-4b88s, container cloud-controller-manager Jan 19 01:12:49.872: INFO: Collecting events for Pod kube-system/cloud-node-manager-g9jns Jan 19 01:12:49.872: INFO: Collecting events for Pod kube-system/cloud-controller-manager-7f87dc989b-4b88s Jan 19 01:12:49.872: INFO: Creating log watcher for controller kube-system/cloud-node-manager-g9jns, container cloud-node-manager Jan 19 01:12:49.872: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-dzqx5, container azuredisk Jan 19 01:12:49.872: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-g1zio0-oot-control-plane-9dpvw Jan 19 01:12:49.872: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-8ldzm, container coredns Jan 19 01:12:49.872: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-dzqx5 Jan 19 01:12:49.873: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jsrd5, container liveness-probe Jan 19 01:12:49.873: INFO: Collecting events for Pod kube-system/cloud-node-manager-windows-xdnjz Jan 19 01:12:49.873: INFO: Creating log watcher for controller kube-system/cloud-node-manager-msqrw, container cloud-node-manager Jan 19 01:12:49.873: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-8ldzm Jan 19 01:12:49.873: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-g1zio0-oot-control-plane-9dpvw, container kube-controller-manager Jan 19 01:12:49.874: INFO: Collecting events for Pod kube-system/kube-proxy-mjh86 Jan 19 01:12:49.874: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-mpcwz, container node-driver-registrar Jan 19 01:12:49.874: INFO: Creating log watcher for controller 
kube-system/kube-proxy-windows-j6ggg, container kube-proxy Jan 19 01:12:49.874: INFO: Creating log watcher for controller kube-system/cloud-node-manager-z6fv9, container cloud-node-manager Jan 19 01:12:49.874: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-mpcwz, container azuredisk Jan 19 01:12:49.880: INFO: Collecting events for Pod kube-system/kube-proxy-windows-j6ggg Jan 19 01:12:49.881: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-n99kw, container coredns Jan 19 01:12:49.881: INFO: Collecting events for Pod kube-system/cloud-node-manager-z6fv9 Jan 19 01:12:49.881: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-g1zio0-oot-control-plane-9dpvw Jan 19 01:12:49.881: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-mpcwz Jan 19 01:12:49.881: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-8gg7l, container kube-proxy Jan 19 01:12:49.881: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jsrd5, container node-driver-registrar Jan 19 01:12:49.881: INFO: Collecting events for Pod kube-system/cloud-node-manager-msqrw Jan 19 01:12:49.882: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-n99kw Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-g1zio0-oot-control-plane-9dpvw, container kube-scheduler Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/containerd-logger-77tl9, container containerd-logger Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/kube-proxy-4m4s6, container kube-proxy Jan 19 01:12:49.882: INFO: Collecting events for Pod kube-system/kube-proxy-windows-8gg7l Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-jsrd5, container azuredisk Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-g1zio0-oot-control-plane-9dpvw, 
container etcd Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/cloud-node-manager-windows-fx4hr, container cloud-node-manager Jan 19 01:12:49.882: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-g1zio0-oot-control-plane-9dpvw Jan 19 01:12:49.882: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-fknkk, container csi-provisioner Jan 19 01:12:49.883: INFO: Collecting events for Pod kube-system/containerd-logger-77tl9 Jan 19 01:12:49.883: INFO: Collecting events for Pod kube-system/kube-proxy-4m4s6 Jan 19 01:12:49.883: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-jsrd5 Jan 19 01:12:49.883: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-mpcwz, container liveness-probe Jan 19 01:12:49.883: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-g1zio0-oot-control-plane-9dpvw Jan 19 01:12:49.883: INFO: Creating log watcher for controller kube-system/containerd-logger-8ftd9, container containerd-logger Jan 19 01:12:49.883: INFO: Creating log watcher for controller kube-system/kube-proxy-j97kr, container kube-proxy Jan 19 01:12:49.883: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-fknkk, container liveness-probe Jan 19 01:12:49.883: INFO: Collecting events for Pod kube-system/cloud-node-manager-windows-fx4hr Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-fknkk, container azuredisk Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-g1zio0-oot-control-plane-9dpvw, container kube-apiserver Jan 19 01:12:49.884: INFO: Collecting events for Pod kube-system/containerd-logger-8ftd9 Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/cloud-node-manager-windows-xdnjz, container cloud-node-manager Jan 19 01:12:49.884: INFO: Collecting events for Pod kube-system/kube-proxy-j97kr 
Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-fknkk, container csi-attacher Jan 19 01:12:49.884: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-fknkk Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-fknkk, container csi-snapshotter Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-fknkk, container csi-resizer Jan 19 01:12:49.884: INFO: Creating log watcher for controller kube-system/kube-proxy-mjh86, container kube-proxy Jan 19 01:12:49.885: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-dzqx5, container liveness-probe Jan 19 01:12:49.885: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-dzqx5, container node-driver-registrar Jan 19 01:12:50.033: INFO: Fetching kube-system pod logs took 968.633248ms Jan 19 01:12:50.033: INFO: Dumping workload cluster capz-e2e-g1zio0/capz-e2e-g1zio0-oot Azure activity log Jan 19 01:12:50.033: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-lnqp7, container tigera-operator Jan 19 01:12:50.034: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-lnqp7 Jan 19 01:12:56.637: INFO: Fetching activity logs took 6.603574279s Jan 19 01:12:56.637: INFO: Dumping all the Cluster API resources in the "capz-e2e-g1zio0" namespace Jan 19 01:12:57.050: INFO: Deleting all clusters in the capz-e2e-g1zio0 namespace STEP: Deleting cluster capz-e2e-g1zio0-oot - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 01:12:57.068 INFO: Waiting for the Cluster capz-e2e-g1zio0/capz-e2e-g1zio0-oot to be deleted STEP: Waiting for cluster capz-e2e-g1zio0-oot to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 
01:12:57.083 Jan 19 01:17:37.493: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-g1zio0 Jan 19 01:17:37.515: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/19/23 01:17:38.115 INFO: "with a 1 control plane nodes and 2 worker nodes" started at Thu, 19 Jan 2023 01:20:02 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 01/19/23 01:20:02.051 (14m47.825s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sclusters\susing\sclusterclass\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\,\sone\slinux\sworker\snode\,\sand\sone\swindows\sworker\snode$'
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-9hb37s/capz-e2e-9hb37s-cc-md-win-qh7nc
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/19/23 00:55:48.674
from junit.e2e_suite.1.xml
2023/01/19 00:24:02 failed trying to get namespace (capz-e2e-9hb37s):namespaces "capz-e2e-9hb37s" not found
clusterclass.cluster.x-k8s.io/ci-default created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-win created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
cluster.cluster.x-k8s.io/capz-e2e-9hb37s-cc created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-9hb37s-cc-calico created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
configmap/cni-capz-e2e-9hb37s-cc-calico-windows created
configmap/csi-proxy-addon created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-9hb37s-cc-hrgqp-7ghwc, Cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc: dialing public load balancer at capz-e2e-9hb37s-cc-63e5171f.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-9hb37s-cc-md-0-whwsq-5b757f5cd-xmdkk, Cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc: dialing public load balancer at capz-e2e-9hb37s-cc-63e5171f.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-9hb37s-cc-md-win-qh7nc-5866c7cc48-5fbkq, Cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc: [dialing public load balancer at capz-e2e-9hb37s-cc-63e5171f.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain, Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/19/23 00:24:02.854
INFO: "" started at Thu, 19 Jan 2023 00:24:02 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-9hb37s" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:24:02.854
Jan 19 00:24:02.854: INFO: starting to create namespace for hosting the "capz-e2e-9hb37s" test spec
INFO: Creating namespace capz-e2e-9hb37s
INFO: Creating event watcher for namespace "capz-e2e-9hb37s"
Jan 19 00:24:03.127: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/19/23 00:24:03.205 (351ms)
> Enter [It] with a single control plane node, one linux worker node, and one windows worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:792 @ 01/19/23 00:24:03.205
INFO: Cluster name is capz-e2e-9hb37s-cc
INFO: Creating the workload cluster with name "capz-e2e-9hb37s-cc" using the "topology" template (Kubernetes v1.24.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-9hb37s-cc --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor topology
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase -
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/19/23 00:24:12.652 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/19/23 00:26:12.891 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/19/23 00:26:12.891 Jan 19 00:28:28.550: INFO: getting history for release projectcalico Jan 19 00:28:28.619: INFO: Release projectcalico does not exist, installing it Jan 19 00:28:29.621: INFO: creating 1 resource(s) Jan 19 00:28:29.706: INFO: creating 1 resource(s) Jan 19 00:28:29.778: INFO: creating 1 resource(s) Jan 19 00:28:29.855: INFO: creating 1 resource(s) Jan 19 00:28:29.940: INFO: creating 1 resource(s) Jan 19 00:28:30.018: INFO: creating 1 resource(s) Jan 19 00:28:30.183: INFO: creating 1 resource(s) Jan 19 00:28:30.351: INFO: creating 1 resource(s) Jan 19 00:28:30.423: INFO: creating 1 resource(s) Jan 19 00:28:30.495: INFO: creating 1 resource(s) Jan 19 00:28:30.572: INFO: creating 1 resource(s) Jan 19 00:28:30.646: INFO: creating 1 resource(s) Jan 19 00:28:30.715: INFO: creating 1 resource(s) Jan 19 00:28:30.792: INFO: creating 1 resource(s) Jan 19 00:28:30.861: INFO: creating 1 resource(s) Jan 19 00:28:30.940: INFO: creating 1 resource(s) Jan 19 00:28:31.040: INFO: creating 1 resource(s) Jan 19 00:28:31.123: INFO: creating 1 resource(s) Jan 19 00:28:31.231: INFO: creating 1 resource(s) Jan 19 00:28:31.372: INFO: creating 1 resource(s) Jan 19 00:28:31.760: INFO: creating 1 resource(s) Jan 19 00:28:31.836: INFO: Clearing discovery cache Jan 19 00:28:31.836: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 19 00:28:35.455: INFO: creating 1 resource(s) Jan 19 00:28:36.078: INFO: creating 6 resource(s) Jan 19 00:28:36.890: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods 
- /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/19/23 00:28:37.354 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:28:37.604 Jan 19 00:28:37.604: INFO: starting to wait for deployment to become available Jan 19 00:28:47.723: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.119099517s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/19/23 00:28:48.581 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:28:48.881 Jan 19 00:28:48.881: INFO: starting to wait for deployment to become available Jan 19 00:29:39.442: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.560493595s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:40.031 Jan 19 00:29:40.031: INFO: starting to wait for deployment to become available Jan 19 00:29:40.090: INFO: Deployment calico-system/calico-typha is now available, took 58.97872ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/19/23 00:29:40.09 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:40.516 Jan 19 00:29:40.516: INFO: starting to wait for deployment to become available Jan 19 00:29:50.931: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.41564282s STEP: Waiting for Ready calico-node daemonset pods - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/19/23 00:29:50.931 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:52.328 Jan 19 00:29:52.328: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 19 00:29:52.387: INFO: 1 daemonset calico-system/calico-node pods are running, took 58.992166ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/19/23 00:29:52.387 STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:52.685 Jan 19 00:29:52.685: INFO: waiting for daemonset calico-system/calico-node-windows to be complete Jan 19 00:29:52.745: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 59.484605ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/19/23 00:29:52.745 STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:53.041 Jan 19 00:29:53.041: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete Jan 19 00:29:53.100: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 58.704002ms INFO: Waiting for the first control plane machine managed by capz-e2e-9hb37s/capz-e2e-9hb37s-cc-hrgqp to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/19/23 00:29:53.126 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/19/23 00:29:53.133 Jan 19 
00:29:53.218: INFO: getting history for release azuredisk-csi-driver-oot Jan 19 00:29:53.277: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 19 00:29:56.141: INFO: creating 1 resource(s) Jan 19 00:29:56.403: INFO: creating 18 resource(s) Jan 19 00:29:56.902: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/19/23 00:29:56.92 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:29:57.169 Jan 19 00:29:57.169: INFO: starting to wait for deployment to become available Jan 19 00:30:27.470: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.300808448s STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/19/23 00:30:27.47 STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:30:28.058 Jan 19 00:30:28.058: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete Jan 19 00:30:48.239: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 20.18088625s STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:30:48.539 Jan 19 00:30:48.539: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete Jan 19 00:30:48.598: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 59.005366ms INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-9hb37s/capz-e2e-9hb37s-cc-hrgqp to be ready (implies underlying nodes to be ready as well) STEP: Waiting 
for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/19/23 00:30:48.611
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/19/23 00:30:48.618
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/19/23 00:30:48.646
STEP: Checking all the machines controlled by capz-e2e-9hb37s-cc-md-0-whwsq are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 00:30:48.659
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/19/23 00:30:48.673
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-9hb37s/capz-e2e-9hb37s-cc-md-win-qh7nc
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/19/23 00:55:48.674
< Exit [It] with a single control plane node, one linux worker node, and one windows worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:792 @ 01/19/23 00:55:48.674 (31m45.469s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 01/19/23 00:55:48.674
Jan 19 00:55:48.674: INFO: FAILED!
Jan 19 00:55:48.674: INFO: Cleaning up after "Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node" spec STEP: Dumping logs from the "capz-e2e-9hb37s-cc" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/19/23 00:55:48.674 Jan 19 00:55:48.674: INFO: Dumping workload cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc logs Jan 19 00:55:48.721: INFO: Collecting logs for Linux node capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj in cluster capz-e2e-9hb37s-cc in namespace capz-e2e-9hb37s Jan 19 00:56:50.190: INFO: Collecting boot logs for AzureMachine capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj Jan 19 00:56:51.304: INFO: Collecting logs for Linux node capz-e2e-9hb37s-cc-md-0-infra-qkql7-4pnc9 in cluster capz-e2e-9hb37s-cc in namespace capz-e2e-9hb37s Jan 19 00:57:52.774: INFO: Collecting boot logs for AzureMachine capz-e2e-9hb37s-cc-md-0-infra-qkql7-4pnc9 Jan 19 00:57:53.280: INFO: Unable to collect logs as node doesn't have addresses Jan 19 00:57:53.280: INFO: Collecting logs for Windows node capz-e2e-9hb37s-cc-md-win-infra-dprt6-wxdhx in cluster capz-e2e-9hb37s-cc in namespace capz-e2e-9hb37s Jan 19 01:01:59.060: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-9hb37s-cc-md-win-infra-dprt6-wxdhx to /logs/artifacts/clusters/capz-e2e-9hb37s-cc/machines/capz-e2e-9hb37s-cc-md-win-qh7nc-5866c7cc48-5fbkq/crashdumps.tar Jan 19 01:01:59.522: INFO: Collecting boot logs for AzureMachine capz-e2e-9hb37s-cc-md-win-infra-dprt6-wxdhx Jan 19 01:01:59.542: INFO: Dumping workload cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc kube-system pod logs Jan 19 01:02:00.150: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-dc764cf48-7pdwv, container calico-apiserver Jan 19 01:02:00.150: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-dc764cf48-7pdwv Jan 19 01:02:00.151: INFO: Creating log 
watcher for controller calico-apiserver/calico-apiserver-dc764cf48-lnzx4, container calico-apiserver Jan 19 01:02:00.151: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-dc764cf48-lnzx4 Jan 19 01:02:00.307: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-dgl9d, container calico-kube-controllers Jan 19 01:02:00.307: INFO: Collecting events for Pod calico-system/calico-node-dwc2j Jan 19 01:02:00.307: INFO: Creating log watcher for controller calico-system/calico-node-5vktn, container calico-node Jan 19 01:02:00.307: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-dgl9d Jan 19 01:02:00.307: INFO: Collecting events for Pod calico-system/calico-node-5vktn Jan 19 01:02:00.307: INFO: Creating log watcher for controller calico-system/calico-node-dwc2j, container calico-node Jan 19 01:02:00.307: INFO: Creating log watcher for controller calico-system/calico-typha-6d744b8845-pthz7, container calico-typha Jan 19 01:02:00.307: INFO: Collecting events for Pod calico-system/calico-node-windows-trjfl Jan 19 01:02:00.307: INFO: Creating log watcher for controller calico-system/calico-typha-6d744b8845-2jvhh, container calico-typha Jan 19 01:02:00.307: INFO: Creating log watcher for controller calico-system/calico-node-windows-trjfl, container calico-node-startup Jan 19 01:02:00.308: INFO: Creating log watcher for controller calico-system/csi-node-driver-25p8v, container csi-node-driver-registrar Jan 19 01:02:00.308: INFO: Creating log watcher for controller calico-system/calico-node-windows-trjfl, container calico-node-felix Jan 19 01:02:00.308: INFO: Collecting events for Pod calico-system/calico-typha-6d744b8845-pthz7 Jan 19 01:02:00.310: INFO: Collecting events for Pod calico-system/csi-node-driver-25p8v Jan 19 01:02:00.310: INFO: Creating log watcher for controller calico-system/csi-node-driver-lxrhf, container calico-csi Jan 19 01:02:00.311: INFO: Creating log watcher for controller 
calico-system/csi-node-driver-lxrhf, container csi-node-driver-registrar
Jan 19 01:02:00.312: INFO: Creating log watcher for controller calico-system/csi-node-driver-25p8v, container calico-csi
Jan 19 01:02:00.312: INFO: Collecting events for Pod calico-system/calico-typha-6d744b8845-2jvhh
Jan 19 01:02:00.313: INFO: Collecting events for Pod calico-system/csi-node-driver-lxrhf
Jan 19 01:02:00.409: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-jnr7q
Jan 19 01:02:00.409: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-r295g, container coredns
Jan 19 01:02:00.409: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-jnr7q, container coredns
Jan 19 01:02:00.410: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-r295g
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-8s7j6, container csi-provisioner
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-5f9wm, container azuredisk
Jan 19 01:02:00.411: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-8s7j6, container liveness-probe
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-d642w, container node-driver-registrar
Jan 19 01:02:00.411: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-8s7j6
Jan 19 01:02:00.411: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-win-5f9wm
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-8s7j6, container azuredisk
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/csi-proxy-26xjs, container csi-proxy
Jan 19 01:02:00.411: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj
Jan 19 01:02:00.411: INFO: Collecting events for Pod kube-system/csi-proxy-26xjs
Jan 19 01:02:00.411: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj, container etcd
Jan 19 01:02:00.412: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-d642w, container azuredisk
Jan 19 01:02:00.412: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-d642w, container liveness-probe
Jan 19 01:02:00.412: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-8s7j6, container csi-attacher
Jan 19 01:02:00.412: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-8s7j6, container csi-snapshotter
Jan 19 01:02:00.412: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj, container kube-apiserver
Jan 19 01:02:00.412: INFO: Collecting events for Pod kube-system/kube-proxy-w7nf6
Jan 19 01:02:00.412: INFO: Creating log watcher for controller kube-system/kube-proxy-g8tpj, container kube-proxy
Jan 19 01:02:00.413: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-d642w
Jan 19 01:02:00.413: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-8s7j6, container csi-resizer
Jan 19 01:02:00.413: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kn284, container liveness-probe
Jan 19 01:02:00.413: INFO: Collecting events for Pod kube-system/kube-proxy-windows-nsbcm
Jan 19 01:02:00.413: INFO: Collecting events for Pod kube-system/kube-proxy-g8tpj
Jan 19 01:02:00.413: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj
Jan 19 01:02:00.413: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj, container kube-controller-manager
Jan 19 01:02:00.413: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-nsbcm, container kube-proxy
Jan 19 01:02:00.413: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-kn284
Jan 19 01:02:00.413: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kn284, container node-driver-registrar
Jan 19 01:02:00.414: INFO: Creating log watcher for controller kube-system/kube-proxy-w7nf6, container kube-proxy
Jan 19 01:02:00.414: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-5f9wm, container liveness-probe
Jan 19 01:02:00.414: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-5f9wm, container node-driver-registrar
Jan 19 01:02:00.414: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kn284, container azuredisk
Jan 19 01:02:00.414: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj
Jan 19 01:02:00.414: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-9hb37s-cc-control-plane-f8xff-6m8pj, container kube-scheduler
Jan 19 01:02:00.517: INFO: Fetching kube-system pod logs took 974.59484ms
Jan 19 01:02:00.517: INFO: Dumping workload cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc Azure activity log
Jan 19 01:02:00.517: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-9g9hz, container tigera-operator
Jan 19 01:02:00.517: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-9g9hz
Jan 19 01:02:00.551: INFO: Error fetching activity logs for cluster capz-e2e-9hb37s-cc in namespace capz-e2e-9hb37s. Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-9hb37s-cc" not found
Jan 19 01:02:00.551: INFO: Fetching activity logs took 33.919736ms
Jan 19 01:02:00.551: INFO: Dumping all the Cluster API resources in the "capz-e2e-9hb37s" namespace
Jan 19 01:02:00.557: INFO: Error starting logs stream for pod kube-system/csi-proxy-26xjs, container csi-proxy: container "csi-proxy" in pod "csi-proxy-26xjs" is waiting to start: ContainerCreating
Jan 19 01:02:00.557: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-5f9wm, container node-driver-registrar: container "node-driver-registrar" in pod "csi-azuredisk-node-win-5f9wm" is waiting to start: PodInitializing
Jan 19 01:02:00.558: INFO: Error starting logs stream for pod calico-system/calico-node-windows-trjfl, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-trjfl" is waiting to start: PodInitializing
Jan 19 01:02:00.558: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-5f9wm, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-node-win-5f9wm" is waiting to start: PodInitializing
Jan 19 01:02:00.558: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-5f9wm, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-win-5f9wm" is waiting to start: PodInitializing
Jan 19 01:02:00.558: INFO: Error starting logs stream for pod calico-system/calico-node-windows-trjfl, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-trjfl" is waiting to start: PodInitializing
Jan 19 01:02:01.026: INFO: Deleting all clusters in the capz-e2e-9hb37s namespace
STEP: Deleting cluster capz-e2e-9hb37s-cc - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 01:02:01.058
INFO: Waiting for the Cluster capz-e2e-9hb37s/capz-e2e-9hb37s-cc to be deleted
STEP: Waiting for cluster capz-e2e-9hb37s-cc to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/19/23 01:02:01.074
Jan 19 01:07:01.406: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-9hb37s
Jan 19 01:07:01.460: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/19/23 01:07:02.038
INFO: "with a single control plane node, one linux worker node, and one windows worker node" started at Thu, 19 Jan 2023 01:08:05 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115 @ 01/19/23 01:08:05.672 (12m16.998s)
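The dump above records several Windows-node containers whose log streams could not be started because they were still in `ContainerCreating`/`PodInitializing`. When triaging a saved copy of the job's build log, those entries can be pulled out with a short filter; a minimal sketch, assuming the log has been downloaded locally (here a two-line sample copied from the dump above stands in for the real file, whose name is an assumption):

```shell
# Stand-in sample of the build log; in real triage this would be the full
# build-log.txt downloaded from the job's artifacts.
cat > build-log.txt <<'EOF'
Jan 19 01:02:00.557: INFO: Error starting logs stream for pod kube-system/csi-proxy-26xjs, container csi-proxy: container "csi-proxy" in pod "csi-proxy-26xjs" is waiting to start: ContainerCreating
Jan 19 01:02:00.517: INFO: Fetching kube-system pod logs took 974.59484ms
EOF

# Keep only the failed log-stream entries, trimmed to pod and container name.
grep -o 'Error starting logs stream for pod [^,]*, container [^:]*' build-log.txt
```

The `grep -o` pattern stops at the first comma and colon, so each match reduces to the pod/container pair, which is usually enough to see that all of the failures cluster on the Windows nodes.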
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
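Any failed spec above can be re-run through the same entry point shown in the job header (`go run hack/e2e.go -v --test --test_args='--ginkgo.focus=…'`) by turning the spec name into a Ginkgo focus regex: regex metacharacters are backslash-escaped and spaces become `\s`, as in the header's GPU-cluster example. A sketch of that escaping, using one of the failed specs above (the `sed` recipe is an assumption; any equivalent escaping works):

```shell
# One of the failed specs from this run, as printed in the results list.
spec='capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node'

# Escape regex metacharacters, then turn spaces into \s, mirroring the
# focus string shown in the job header.
focus=$(printf '%s' "$spec" | sed -e 's/[][\\.()-]/\\&/g' -e 's/ /\\s/g')

# Print the re-run command, anchored with $ as in the header's example.
printf "go run hack/e2e.go -v --test --test_args='--ginkgo.focus=%s\$'\n" "$focus"
```

Anchoring with `$` keeps Ginkgo from also matching longer spec names that share this one as a prefix.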