Result   | FAILURE
Tests    | 5 failed / 4 succeeded
Started  |
Elapsed  | 1h52m
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377
Timed out after 1200.000s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145
from junit.e2e_suite.2.xml
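The stack trace points at the control plane wait in the vendored cluster-api test framework (controlplane_helpers.go:145). A minimal sketch of how such a wait produces "Expected <bool>: false to be true" on timeout; this is illustrative Go under assumed names and label selectors, not the framework's actual code:

// Sketch only: polls until one control plane Machine for the cluster has a
// NodeRef (i.e. is backed by a provisioned node). If the 1200s budget runs
// out, Gomega fails with the last polled value:
// Expected <bool>: false to be true.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func waitForOneControlPlaneNode(ctx context.Context, c client.Client, clusterName string) {
	Eventually(func() bool {
		machines := &clusterv1.MachineList{}
		// Standard CAPI labels; the exact constants vary across CAPI versions.
		if err := c.List(ctx, machines, client.MatchingLabels{
			"cluster.x-k8s.io/cluster-name":  clusterName,
			"cluster.x-k8s.io/control-plane": "",
		}); err != nil {
			return false
		}
		for _, m := range machines.Items {
			if m.Status.NodeRef != nil {
				return true
			}
		}
		return false
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}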
INFO: "with a single control plane node and 1 node" started at Wed, 20 Apr 2022 20:13:11 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-xzxxaj" for hosting the cluster Apr 20 20:13:11.315: INFO: starting to create namespace for hosting the "capz-e2e-xzxxaj" test spec 2022/04/20 20:13:11 failed trying to get namespace (capz-e2e-xzxxaj):namespaces "capz-e2e-xzxxaj" not found INFO: Creating namespace capz-e2e-xzxxaj INFO: Creating event watcher for namespace "capz-e2e-xzxxaj" Apr 20 20:13:11.360: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-xzxxaj-gpu INFO: Creating the workload cluster with name "capz-e2e-xzxxaj-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-xzxxaj-gpu --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu serverside-applied azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu serverside-applied kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu-control-plane serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu-control-plane serverside-applied azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity serverside-applied machinedeployment.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu-md-0 serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu-md-0 serverside-applied kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu-md-0 serverside-applied clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied configmap/nvidia-clusterpolicy-crd serverside-applied configmap/nvidia-gpu-operator-components serverside-applied clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-xzxxaj-gpu-calico serverside-applied configmap/cni-capz-e2e-xzxxaj-gpu-calico serverside-applied INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-xzxxaj/capz-e2e-xzxxaj-gpu-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-xzxxaj-gpu" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-xzxxaj/capz-e2e-xzxxaj-gpu logs Apr 20 20:34:12.679: INFO: INFO: Collecting logs for node capz-e2e-xzxxaj-gpu-control-plane-27hgm in cluster capz-e2e-xzxxaj-gpu in namespace capz-e2e-xzxxaj Apr 20 20:36:22.852: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xzxxaj-gpu-control-plane-27hgm Failed to get logs for machine capz-e2e-xzxxaj-gpu-control-plane-xmflf, cluster capz-e2e-xzxxaj/capz-e2e-xzxxaj-gpu: dialing public load balancer at capz-e2e-xzxxaj-gpu-8f4392f5.eastus.cloudapp.azure.com: dial tcp 20.121.176.4:22: connect: connection timed out Apr 20 20:36:23.629: INFO: INFO: Collecting logs for node capz-e2e-xzxxaj-gpu-md-0-2cn6s in cluster capz-e2e-xzxxaj-gpu in namespace capz-e2e-xzxxaj Apr 20 20:38:33.928: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-xzxxaj-gpu-md-0-2cn6s �[1mSTEP�[0m: Redacting sensitive information from logs
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\sEnabled\scluster\sWith\s3\scontrol\-plane\snodes\sand\s1\sLinux\sworker\snode\sand\s1\sWindows\sworker\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496
Timed out after 1200.000s.
Expected
    <int>: 2
to equal
    <int>: 3
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:108
from junit.e2e_suite.2.xml
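This signature, and the identical "<int> to equal <int>" timeouts in the ipv6, private-cluster, and 3-node HA specs below, come from the follow-up wait for the remaining control plane replicas (controlplane_helpers.go:108): the count of provisioned control plane machines (here 2) never reached the desired 3 within 1200s. A sketch under the same assumptions and imports as the previous one (illustrative, not the framework's exact code):

// Sketch only: counts control plane Machines with a NodeRef until the count
// reaches the desired replica count. On timeout Gomega reports the last
// count, e.g.: Expected <int>: 2 to equal <int>: 3.
func waitForAllControlPlaneNodes(ctx context.Context, c client.Client, clusterName string, replicas int) {
	Eventually(func() int {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines, client.MatchingLabels{
			"cluster.x-k8s.io/cluster-name":  clusterName,
			"cluster.x-k8s.io/control-plane": "",
		}); err != nil {
			return 0
		}
		count := 0
		for _, m := range machines.Items {
			if m.Status.NodeRef != nil {
				count++
			}
		}
		return count
	}, 20*time.Minute, 10*time.Second).Should(Equal(replicas))
}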
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 20 Apr 2022 20:39:01 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-gqngo1" for hosting the cluster Apr 20 20:39:01.620: INFO: starting to create namespace for hosting the "capz-e2e-gqngo1" test spec 2022/04/20 20:39:01 failed trying to get namespace (capz-e2e-gqngo1):namespaces "capz-e2e-gqngo1" not found INFO: Creating namespace capz-e2e-gqngo1 INFO: Creating event watcher for namespace "capz-e2e-gqngo1" Apr 20 20:39:01.657: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-gqngo1-win-ha INFO: Creating the workload cluster with name "capz-e2e-gqngo1-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-gqngo1-win-ha --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor windows INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-md-0 created machinedeployment.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-md-win created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-md-win created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-md-win created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-gqngo1-win-ha-flannel created configmap/cni-capz-e2e-gqngo1-win-ha-flannel created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha-control-plane to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-gqngo1-win-ha" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha logs Apr 20 21:02:12.784: INFO: INFO: Collecting logs for node capz-e2e-gqngo1-win-ha-control-plane-v8mwr in cluster capz-e2e-gqngo1-win-ha in namespace capz-e2e-gqngo1 Apr 20 21:02:18.015: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-gqngo1-win-ha-control-plane-v8mwr Failed to get logs for machine capz-e2e-gqngo1-win-ha-control-plane-j6k7n, cluster capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha: dialing from control plane to target node at capz-e2e-gqngo1-win-ha-control-plane-v8mwr: ssh: rejected: connect failed (Temporary failure in name resolution) Apr 20 21:02:18.729: INFO: INFO: Collecting logs for node 
capz-e2e-gqngo1-win-ha-control-plane-q4t68 in cluster capz-e2e-gqngo1-win-ha in namespace capz-e2e-gqngo1 Apr 20 21:02:28.009: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-gqngo1-win-ha-control-plane-q4t68 Apr 20 21:02:28.633: INFO: INFO: Collecting logs for node capz-e2e-gqngo1-win-ha-control-plane-dcfzv in cluster capz-e2e-gqngo1-win-ha in namespace capz-e2e-gqngo1 Apr 20 21:02:34.577: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-gqngo1-win-ha-control-plane-dcfzv Apr 20 21:02:34.851: INFO: INFO: Collecting logs for node capz-e2e-gqngo1-win-ha-md-0-mkgnk in cluster capz-e2e-gqngo1-win-ha in namespace capz-e2e-gqngo1 Apr 20 21:02:41.608: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-gqngo1-win-ha-md-0-mkgnk Apr 20 21:02:41.855: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster capz-e2e-gqngo1-win-ha in namespace capz-e2e-gqngo1 Apr 20 21:03:12.220: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-gqngo1-win-ha-md-win-xhmdk �[1mSTEP�[0m: Dumping workload cluster capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 337.646805ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-lvn2v, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-qbp4s, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-kmnfb, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-gb6vf, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-gqngo1-win-ha-control-plane-dcfzv, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-windows-8z74d, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-tbcrk, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-6f27m, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-gqngo1-win-ha-control-plane-dcfzv, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-8j2gk, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-2d46d, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-tlhhf, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-gqngo1-win-ha-control-plane-dcfzv, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-gqngo1-win-ha-control-plane-dcfzv, container kube-controller-manager �[1mSTEP�[0m: Got error while iterating over 
activity logs for resource group capz-e2e-gqngo1-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded �[1mSTEP�[0m: Fetching activity logs took 30.000970942s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-gqngo1" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-gqngo1 namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-gqngo1-win-ha INFO: Waiting for the Cluster capz-e2e-gqngo1/capz-e2e-gqngo1-win-ha to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-gqngo1-win-ha to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8j2gk, container kube-flannel: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-tbcrk, container kube-flannel: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-lvn2v, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-windows-8z74d, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-gqngo1-win-ha-control-plane-q4t68, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-gqngo1 �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 39m55s on Ginkgo node 2 of 3
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sipv6\scontrol\-plane\scluster\sWith\sipv6\sworker\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269
Timed out after 1200.004s.
Expected
    <int>: 1
to equal
    <int>: 3
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:108
from junit.e2e_suite.2.xml
INFO: "With ipv6 worker node" started at Wed, 20 Apr 2022 19:43:03 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-8id5c9" for hosting the cluster Apr 20 19:43:03.082: INFO: starting to create namespace for hosting the "capz-e2e-8id5c9" test spec 2022/04/20 19:43:03 failed trying to get namespace (capz-e2e-8id5c9):namespaces "capz-e2e-8id5c9" not found INFO: Creating namespace capz-e2e-8id5c9 INFO: Creating event watcher for namespace "capz-e2e-8id5c9" Apr 20 19:43:03.185: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8id5c9-ipv6 INFO: Creating the workload cluster with name "capz-e2e-8id5c9-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-8id5c9-ipv6 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor ipv6 INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6 created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6-control-plane created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created machinedeployment.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6-md-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-8id5c9-ipv6-calico created configmap/cni-capz-e2e-8id5c9-ipv6-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6-control-plane to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-8id5c9-ipv6" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6 logs Apr 20 20:06:55.306: INFO: INFO: Collecting logs for node capz-e2e-8id5c9-ipv6-control-plane-xknb8 in cluster capz-e2e-8id5c9-ipv6 in namespace capz-e2e-8id5c9 Apr 20 20:07:08.630: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8id5c9-ipv6-control-plane-xknb8 Apr 20 20:07:09.498: INFO: INFO: Collecting logs for node capz-e2e-8id5c9-ipv6-control-plane-ptwbk in cluster capz-e2e-8id5c9-ipv6 in namespace capz-e2e-8id5c9 Apr 20 20:07:13.314: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8id5c9-ipv6-control-plane-ptwbk Failed to get logs for machine capz-e2e-8id5c9-ipv6-control-plane-vlvsv, cluster capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6: dialing from control plane to target node at capz-e2e-8id5c9-ipv6-control-plane-ptwbk: ssh: rejected: connect failed (Name or service not known) Apr 20 20:07:13.597: INFO: INFO: Collecting logs for node capz-e2e-8id5c9-ipv6-md-0-26xsr in cluster capz-e2e-8id5c9-ipv6 in namespace capz-e2e-8id5c9 Apr 20 
20:07:23.839: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-8id5c9-ipv6-md-0-26xsr �[1mSTEP�[0m: Dumping workload cluster capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6 kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 322.537947ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6 Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-kgnt4, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-gpwvd, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-jdmx5, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-gsg9f, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-drjq2, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-jkd2m, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-gt9z9, container coredns �[1mSTEP�[0m: Fetching activity logs took 1.010979598s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-8id5c9" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-8id5c9 namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-8id5c9-ipv6 INFO: Waiting for the Cluster capz-e2e-8id5c9/capz-e2e-8id5c9-ipv6 to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-8id5c9-ipv6 to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-drjq2, container calico-kube-controllers: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-gsg9f, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-jdmx5, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-gpwvd, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kgnt4, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-jkd2m, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod 
kube-system/kube-apiserver-capz-e2e-8id5c9-ipv6-control-plane-xknb8, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gt9z9, container coredns: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-8id5c9 �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "With ipv6 worker node" ran for 30m8s on Ginkgo node 2 of 3
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\sCreates\sa\spublic\smanagement\scluster\sin\sthe\ssame\svnet$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141
Timed out after 1200.000s.
Expected
    <int>: 1
to equal
    <int>: 3
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:108
from junit.e2e_suite.3.xml
INFO: "Creates a public management cluster in the same vnet" started at Wed, 20 Apr 2022 19:43:03 UTC on Ginkgo node 3 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-4fz65d" for hosting the cluster Apr 20 19:43:03.055: INFO: starting to create namespace for hosting the "capz-e2e-4fz65d" test spec 2022/04/20 19:43:03 failed trying to get namespace (capz-e2e-4fz65d):namespaces "capz-e2e-4fz65d" not found INFO: Creating namespace capz-e2e-4fz65d INFO: Creating event watcher for namespace "capz-e2e-4fz65d" Apr 20 19:43:03.149: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4fz65d-public-custom-vnet �[1mSTEP�[0m: creating Azure clients with the workload cluster's subscription �[1mSTEP�[0m: creating a resource group �[1mSTEP�[0m: creating a network security group �[1mSTEP�[0m: creating a node security group �[1mSTEP�[0m: creating a node routetable �[1mSTEP�[0m: creating a virtual network INFO: Creating the workload cluster with name "capz-e2e-4fz65d-public-custom-vnet" using the "custom-vnet" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-4fz65d-public-custom-vnet --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor custom-vnet INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created machinehealthcheck.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-4fz65d-public-custom-vnet-calico created configmap/cni-capz-e2e-4fz65d-public-custom-vnet-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-4fz65d/capz-e2e-4fz65d-public-custom-vnet-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-4fz65d/capz-e2e-4fz65d-public-custom-vnet-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned �[1mSTEP�[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-4fz65d-public-custom-vnet-md-0-4lxk7 �[1mSTEP�[0m: creating a Kubernetes client to the workload cluster �[1mSTEP�[0m: Creating a namespace for 
hosting the azure-private-cluster test spec Apr 20 19:47:47.405: INFO: starting to create namespace for hosting the azure-private-cluster test spec INFO: Creating namespace capz-e2e-4fz65d INFO: Creating event watcher for namespace "capz-e2e-4fz65d" �[1mSTEP�[0m: Initializing the workload cluster INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure INFO: Waiting for provider controllers to be running �[1mSTEP�[0m: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-75467796c5-cb295, container manager �[1mSTEP�[0m: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-688b75d88d-zv8fv, container manager �[1mSTEP�[0m: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-58757dd9b4-zptks, container manager �[1mSTEP�[0m: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-84649cf55b-xr487, container manager �[1mSTEP�[0m: Ensure public API server is stable before creating private cluster �[1mSTEP�[0m: Creating a private workload cluster INFO: Creating the workload cluster with name "capz-e2e-4r749k-private" using the "private" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-4r749k-private --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor private INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-4r749k-private created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-4r749k-private created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-4r749k-private-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4r749k-private-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-4r749k-private-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4r749k-private-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-4r749k-private-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-4r749k-private-calico created configmap/cni-capz-e2e-4r749k-private-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-4fz65d/capz-e2e-4r749k-private-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by capz-e2e-4fz65d/capz-e2e-4r749k-private-control-plane to be provisioned �[1mSTEP�[0m: Waiting for all control plane 
nodes to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-4fz65d-public-custom-vnet" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-4fz65d/capz-e2e-4fz65d-public-custom-vnet logs Apr 20 20:19:11.699: INFO: INFO: Collecting logs for node capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl in cluster capz-e2e-4fz65d-public-custom-vnet in namespace capz-e2e-4fz65d Apr 20 20:19:23.448: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl Apr 20 20:19:24.348: INFO: INFO: Collecting logs for node capz-e2e-4fz65d-public-custom-vnet-md-0-4lxk7 in cluster capz-e2e-4fz65d-public-custom-vnet in namespace capz-e2e-4fz65d Apr 20 20:19:37.246: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-4fz65d-public-custom-vnet-md-0-4lxk7 �[1mSTEP�[0m: Dumping workload cluster capz-e2e-4fz65d/capz-e2e-4fz65d-public-custom-vnet kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 174.938478ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-4fz65d/capz-e2e-4fz65d-public-custom-vnet Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-dsh98, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-mthgr, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-x455h, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-cdxws, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-m6ckt, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-4fz65d-public-custom-vnet-control-plane-sdkgl, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-2cqtb, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-prn98, container coredns �[1mSTEP�[0m: Got error while iterating over activity logs for resource group capz-e2e-4fz65d-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded �[1mSTEP�[0m: Fetching activity logs took 30.000940758s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-4fz65d" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-4fz65d namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-4fz65d-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-4fz65d/capz-e2e-4fz65d-public-custom-vnet to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-4fz65d-public-custom-vnet to be deleted W0420 20:25:06.733852 24139 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0420 20:25:37.792577 24139 trace.go:205] Trace[250121179]: "Reflector 
ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Apr-2022 20:25:07.789) (total time: 30003ms): Trace[250121179]: [30.003223663s] [30.003223663s] END E0420 20:25:37.792653 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": dial tcp 20.124.63.8:6443: i/o timeout I0420 20:26:10.573723 24139 trace.go:205] Trace[2113313457]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Apr-2022 20:25:40.572) (total time: 30001ms): Trace[2113313457]: [30.001150094s] [30.001150094s] END E0420 20:26:10.573813 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": dial tcp 20.124.63.8:6443: i/o timeout I0420 20:26:45.157418 24139 trace.go:205] Trace[224795123]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Apr-2022 20:26:15.156) (total time: 30001ms): Trace[224795123]: [30.001127321s] [30.001127321s] END E0420 20:26:45.157549 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": dial tcp 20.124.63.8:6443: i/o timeout I0420 20:27:25.137819 24139 trace.go:205] Trace[1663887171]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Apr-2022 20:26:55.136) (total time: 30000ms): Trace[1663887171]: [30.000860183s] [30.000860183s] END E0420 20:27:25.137898 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": dial tcp 20.124.63.8:6443: i/o timeout I0420 20:28:20.455583 24139 trace.go:205] Trace[2067567054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Apr-2022 20:27:50.454) (total time: 30000ms): Trace[2067567054]: [30.000700948s] [30.000700948s] END E0420 20:28:20.455652 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": dial tcp 20.124.63.8:6443: i/o timeout I0420 20:29:37.978364 24139 trace.go:205] Trace[1034403980]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (20-Apr-2022 20:29:07.977) (total time: 30001ms): Trace[1034403980]: [30.001086452s] [30.001086452s] END E0420 20:29:37.978444 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": 
dial tcp 20.124.63.8:6443: i/o timeout E0420 20:30:20.931267 24139 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4fz65d/events?resourceVersion=8880": dial tcp: lookup capz-e2e-4fz65d-public-custom-vnet-4450fcc5.eastus.cloudapp.azure.com on 10.63.240.10:53: no such host �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-4fz65d �[1mSTEP�[0m: Running additional cleanup for the "create-workload-cluster" test spec Apr 20 20:30:38.413: INFO: deleting an existing virtual network "custom-vnet" �[1mSTEP�[0m: Redacting sensitive information from logs
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sWith\s3\scontrol\-plane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203
Timed out after 1200.002s.
Expected
    <int>: 1
to equal
    <int>: 3
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:108
from junit.e2e_suite.1.xml
INFO: "With 3 control-plane nodes and 2 worker nodes" started at Wed, 20 Apr 2022 19:43:03 UTC on Ginkgo node 1 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-3fcqir" for hosting the cluster Apr 20 19:43:03.062: INFO: starting to create namespace for hosting the "capz-e2e-3fcqir" test spec 2022/04/20 19:43:03 failed trying to get namespace (capz-e2e-3fcqir):namespaces "capz-e2e-3fcqir" not found INFO: Creating namespace capz-e2e-3fcqir INFO: Creating event watcher for namespace "capz-e2e-3fcqir" Apr 20 19:43:03.139: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-3fcqir-ha INFO: Creating the workload cluster with name "capz-e2e-3fcqir-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-3fcqir-ha --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 2 --flavor (default) INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-3fcqir-ha created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-3fcqir-ha created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-3fcqir-ha-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-3fcqir-ha-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-3fcqir-ha-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-3fcqir-ha-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-3fcqir-ha-md-0 created machinehealthcheck.cluster.x-k8s.io/capz-e2e-3fcqir-ha-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-3fcqir-ha-calico created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created configmap/cni-capz-e2e-3fcqir-ha-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-3fcqir/capz-e2e-3fcqir-ha-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by capz-e2e-3fcqir/capz-e2e-3fcqir-ha-control-plane to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-3fcqir-ha" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-3fcqir/capz-e2e-3fcqir-ha logs Apr 20 20:06:35.359: INFO: INFO: Collecting logs for node capz-e2e-3fcqir-ha-control-plane-98hs8 in cluster capz-e2e-3fcqir-ha in namespace capz-e2e-3fcqir Apr 20 20:06:48.156: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3fcqir-ha-control-plane-98hs8 Apr 20 20:06:49.003: INFO: INFO: Collecting logs for node capz-e2e-3fcqir-ha-control-plane-w8jdh in cluster capz-e2e-3fcqir-ha in namespace capz-e2e-3fcqir Apr 20 20:06:52.481: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3fcqir-ha-control-plane-w8jdh Failed to get logs for machine capz-e2e-3fcqir-ha-control-plane-hnzn4, cluster capz-e2e-3fcqir/capz-e2e-3fcqir-ha: dialing from control plane to target node at capz-e2e-3fcqir-ha-control-plane-w8jdh: ssh: rejected: connect failed (Temporary failure in name resolution) Apr 20 20:06:52.761: INFO: INFO: Collecting logs for node capz-e2e-3fcqir-ha-md-0-srfdh in cluster 
capz-e2e-3fcqir-ha in namespace capz-e2e-3fcqir Apr 20 20:07:02.744: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3fcqir-ha-md-0-srfdh Apr 20 20:07:03.292: INFO: INFO: Collecting logs for node capz-e2e-3fcqir-ha-md-0-jnrhs in cluster capz-e2e-3fcqir-ha in namespace capz-e2e-3fcqir Apr 20 20:07:11.377: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-3fcqir-ha-md-0-jnrhs �[1mSTEP�[0m: Dumping workload cluster capz-e2e-3fcqir/capz-e2e-3fcqir-ha kube-system pod logs �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-vml8b, container coredns �[1mSTEP�[0m: Fetching kube-system pod logs took 320.859399ms �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-3fcqir-ha-control-plane-98hs8, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-3fcqir-ha-control-plane-98hs8, container kube-controller-manager �[1mSTEP�[0m: Dumping workload cluster capz-e2e-3fcqir/capz-e2e-3fcqir-ha Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-3fcqir-ha-control-plane-98hs8, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-3fcqir-ha-control-plane-98hs8, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-mj8sh, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-pf65b, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-jwz7p, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-kzzpq, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-wg88d, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-jdc9v, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-v2chk, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-f6xlc, container coredns �[1mSTEP�[0m: Fetching activity logs took 1.115247037s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-3fcqir" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-3fcqir namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-3fcqir-ha INFO: Waiting for the Cluster capz-e2e-3fcqir/capz-e2e-3fcqir-ha to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-3fcqir-ha to be deleted �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-3fcqir �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 29m57s on Ginkgo node 1 of 3
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time