Result   | FAILURE
Tests    | 2 failed / 7 succeeded
Started  |
Elapsed  | 1h46m
Revision | release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:76
from junit.e2e_suite.2.xml
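Note on the failure shape: "Expected <bool>: false to be true" is Gomega's standard message when an Eventually polls a func() bool that never returns true before the deadline. Here the condition is the "Waiting for a node to have an "nvidia.com/gpu" allocatable resource" step visible in the log below. A minimal sketch of that kind of wait, assuming a client-go clientset for the workload cluster (hasGPUAllocatable, waitForGPU, and the 10s poll interval are illustrative, not the actual code at azure_gpu.go:76):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasGPUAllocatable (hypothetical name) reports whether any node advertises
// a non-zero "nvidia.com/gpu" allocatable resource, i.e. whether the NVIDIA
// device plugin has come up on a GPU node.
func hasGPUAllocatable(ctx context.Context, cs kubernetes.Interface) bool {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false
	}
	for _, n := range nodes.Items {
		if q, ok := n.Status.Allocatable[corev1.ResourceName("nvidia.com/gpu")]; ok && !q.IsZero() {
			return true
		}
	}
	return false
}

// waitForGPU polls until a GPU becomes allocatable; 20 minutes matches the
// 1200s timeout reported above.
func waitForGPU(ctx context.Context, cs kubernetes.Interface) {
	Eventually(func() bool {
		return hasGPUAllocatable(ctx, cs)
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}

When a wait like this times out, the usual things to check are whether the crs-gpu-operator ClusterResourceSet (applied in the log below) deployed cleanly and whether the worker VM actually came up as a GPU-capable (N-series) instance.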
INFO: "with a single control plane node and 1 node" started at Sun, 08 May 2022 20:18:59 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-1na6wc" for hosting the cluster May 8 20:18:59.466: INFO: starting to create namespace for hosting the "capz-e2e-1na6wc" test spec 2022/05/08 20:18:59 failed trying to get namespace (capz-e2e-1na6wc):namespaces "capz-e2e-1na6wc" not found INFO: Creating namespace capz-e2e-1na6wc INFO: Creating event watcher for namespace "capz-e2e-1na6wc" May 8 20:18:59.524: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-1na6wc-gpu INFO: Creating the workload cluster with name "capz-e2e-1na6wc-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-1na6wc-gpu --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-1na6wc-gpu serverside-applied azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-1na6wc-gpu serverside-applied kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-1na6wc-gpu-control-plane serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1na6wc-gpu-control-plane serverside-applied azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity serverside-applied machinedeployment.cluster.x-k8s.io/capz-e2e-1na6wc-gpu-md-0 serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-1na6wc-gpu-md-0 serverside-applied kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-1na6wc-gpu-md-0 serverside-applied clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied configmap/nvidia-clusterpolicy-crd serverside-applied configmap/nvidia-gpu-operator-components serverside-applied clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-1na6wc-gpu-calico serverside-applied configmap/cni-capz-e2e-1na6wc-gpu-calico serverside-applied INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-1na6wc/capz-e2e-1na6wc-gpu-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-1na6wc/capz-e2e-1na6wc-gpu-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned �[1mSTEP�[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: creating a Kubernetes client to the workload cluster �[1mSTEP�[0m: Waiting for a node to have an "nvidia.com/gpu" allocatable resource �[1mSTEP�[0m: Dumping logs from the "capz-e2e-1na6wc-gpu" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-1na6wc/capz-e2e-1na6wc-gpu logs May 8 20:44:11.593: INFO: INFO: Collecting logs for node capz-e2e-1na6wc-gpu-control-plane-lgwjw in cluster capz-e2e-1na6wc-gpu in namespace capz-e2e-1na6wc May 8 20:44:36.884: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1na6wc-gpu-control-plane-lgwjw May 8 20:44:37.503: INFO: INFO: Collecting logs for node 
capz-e2e-1na6wc-gpu-md-0-d2pf4 in cluster capz-e2e-1na6wc-gpu in namespace capz-e2e-1na6wc May 8 20:44:49.709: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-1na6wc-gpu-md-0-d2pf4 �[1mSTEP�[0m: Dumping workload cluster capz-e2e-1na6wc/capz-e2e-1na6wc-gpu kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 109.325472ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-1na6wc/capz-e2e-1na6wc-gpu Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-8mfb8, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-5lzlj, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-bdq9g, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-lrs7d, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-2phrn, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-xq2ws, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-pljjt, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container kube-scheduler �[1mSTEP�[0m: Fetching activity logs took 997.92413ms �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-1na6wc" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-1na6wc namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-1na6wc-gpu INFO: Waiting for the Cluster capz-e2e-1na6wc/capz-e2e-1na6wc-gpu to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-1na6wc-gpu to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lrs7d, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xq2ws, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-bdq9g, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-2phrn, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8mfb8, container calico-kube-controllers: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1na6wc-gpu-control-plane-lgwjw, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1na6wc-gpu-control-plane-lgwjw, 
container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-1na6wc �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "with a single control plane node and 1 node" ran for 33m50s on Ginkgo node 2 of 3
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\swith\sa\s1\scontrol\splane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419
Timed out after 1200.002s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/controlplane_helpers.go:145
from junit.e2e_suite.3.xml
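Same Eventually-on-bool shape, but this time from the shared cluster-api test framework: controlplane_helpers.go:145 sits inside the wait for the first control plane machine, so no control plane Machine got a node within the 1200s window. A rough sketch of what such a wait checks, assuming a controller-runtime client against the management cluster (the function name and poll interval are illustrative; the label keys are the standard cluster.x-k8s.io machine labels, and this is not the framework's actual code):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine (illustrative) polls the management cluster
// until at least one control plane Machine for the given cluster has been
// assigned a workload cluster node, i.e. Status.NodeRef is set.
func waitForOneControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string) {
	Eventually(func() bool {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false
		}
		for _, m := range machines.Items {
			if m.Status.NodeRef != nil {
				return true
			}
		}
		return false
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}

Because the run never got past "Waiting for one control plane node to exist", the dump below collects only node boot logs for the AzureMachines, not kube-system pod logs; the boot log of the control plane VM is the most useful artifact here.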
INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sun, 08 May 2022 20:23:28 UTC on Ginkgo node 3 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-s5wiug" for hosting the cluster May 8 20:23:28.002: INFO: starting to create namespace for hosting the "capz-e2e-s5wiug" test spec 2022/05/08 20:23:28 failed trying to get namespace (capz-e2e-s5wiug):namespaces "capz-e2e-s5wiug" not found INFO: Creating namespace capz-e2e-s5wiug INFO: Creating event watcher for namespace "capz-e2e-s5wiug" May 8 20:23:28.046: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s5wiug-oot INFO: Creating the workload cluster with name "capz-e2e-s5wiug-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-s5wiug-oot --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 2 --flavor external-cloud-provider INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-s5wiug-oot created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-s5wiug-oot created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-s5wiug-oot-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-s5wiug-oot-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-s5wiug-oot-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-s5wiug-oot-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-s5wiug-oot-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/crs-ccm created clusterresourceset.addons.cluster.x-k8s.io/crs-node-manager created configmap/cloud-controller-manager-addon created configmap/cloud-node-manager-addon created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-s5wiug-oot-calico created configmap/cni-capz-e2e-s5wiug-oot-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-s5wiug/capz-e2e-s5wiug-oot-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-s5wiug-oot" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-s5wiug/capz-e2e-s5wiug-oot logs May 8 20:44:19.275: INFO: INFO: Collecting logs for node capz-e2e-s5wiug-oot-control-plane-q5dqs in cluster capz-e2e-s5wiug-oot in namespace capz-e2e-s5wiug May 8 20:44:32.343: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-s5wiug-oot-control-plane-q5dqs May 8 20:44:32.988: INFO: INFO: Collecting logs for node capz-e2e-s5wiug-oot-md-0-xwv2j in cluster capz-e2e-s5wiug-oot in namespace capz-e2e-s5wiug May 8 20:44:37.141: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-s5wiug-oot-md-0-xwv2j �[1mSTEP�[0m: Redacting sensitive information from logs
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation Creating an AKS cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time