PR       | dependabot[bot]: build(deps): bump github.com/onsi/ginkgo/v2 from 2.6.1 to 2.7.0
Result   | FAILURE
Tests    | 2 failed / 64 succeeded
Started  |
Elapsed  | 1h18m
Revision | dc0e55a595b00c8c472695507d54b564f1004e71
Refs     | 3970
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\sCSI\=external\sCCM\=in\-tree\sAWSCSIMigration\=on\:\supgrade\sto\sv1\.23\sshould\screate\svolumes\sdynamically\swith\sexternal\scloud\sprovider$'
[FAILED] Timed out after 1200.001s.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:176 @ 01/10/23 12:35:24.579
from junit.e2e_suite.xml
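Note: the "Expected <bool>: false to be true" shape is Gomega's rendering of an Eventually(...).Should(BeTrue()) poll that ran out of time; the cluster_helpers.go waits in the cluster-api test framework follow this pattern. A minimal sketch of the pattern, using an illustrative helper name and the ["20m" "10s"] intervals logged below (not the framework's actual code):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForClusterDeleted is an illustrative stand-in for the framework's wait
// helper: it polls until the Cluster object is gone, and a timeout surfaces
// exactly as "Expected <bool>: false to be true".
func waitForClusterDeleted(ctx context.Context, c client.Client, key types.NamespacedName) {
	Eventually(func() bool {
		cluster := &clusterv1.Cluster{}
		err := c.Get(ctx, key, cluster)
		return apierrors.IsNotFound(err) // true once deletion has completed
	}, 20*time.Minute, 10*time.Second).Should(BeTrue())
}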
cluster.cluster.x-k8s.io/only-csi-external-upgrade-us00l0 created
awscluster.infrastructure.cluster.x-k8s.io/only-csi-external-upgrade-us00l0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-control-plane created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-control-plane created
machinedeployment.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-md-0 created
configmap/cni-only-csi-external-upgrade-us00l0-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-crs-0 created
cluster.cluster.x-k8s.io/only-csi-external-upgrade-us00l0 configured
awscluster.infrastructure.cluster.x-k8s.io/only-csi-external-upgrade-us00l0 configured
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-control-plane configured
awsmachinetemplate.infrastructure.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-control-plane unchanged
machinedeployment.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-md-0 configured
awsmachinetemplate.infrastructure.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-md-0 unchanged
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-md-0 unchanged
configmap/cni-only-csi-external-upgrade-us00l0-crs-0 unchanged
clusterresourceset.addons.cluster.x-k8s.io/only-csi-external-upgrade-us00l0-crs-0 unchanged
clusterresourceset.addons.cluster.x-k8s.io/crs-csi created
configmap/aws-ebs-csi-driver-addon created
> Enter [BeforeEach] [unmanaged] [functional] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:55 @ 01/10/23 11:36:32.739
< Exit [BeforeEach] [unmanaged] [functional] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:55 @ 01/10/23 11:36:32.739 (0s)
> Enter [It] should create volumes dynamically with external cloud provider - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:326 @ 01/10/23 11:36:32.739
STEP: Node 13 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:4} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:187 @ 01/10/23 11:36:32.743
STEP: Node 13 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:4} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:216 @ 01/10/23 11:36:33.743
STEP: Creating a namespace for hosting the "only-csi-external-upgrade" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:43 @ 01/10/23 11:36:33.743
INFO: Creating namespace only-csi-external-upgrade-cxo73t
STEP: Creating first cluster with single control plane - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:334 @ 01/10/23 11:36:34.067
INFO: Creating the workload cluster with name "only-csi-external-upgrade-us00l0" using the "(default)" template (Kubernetes v1.22.17, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster only-csi-external-upgrade-us00l0 --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/10/23 11:36:40.977
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by only-csi-external-upgrade-cxo73t/only-csi-external-upgrade-us00l0-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/10/23 11:44:31.267
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane only-csi-external-upgrade-cxo73t/only-csi-external-upgrade-us00l0-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/10/23 11:46:41.388
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/10/23 11:47:11.413
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/10/23 11:47:11.442
STEP: Checking all the machines controlled by only-csi-external-upgrade-us00l0-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 11:47:41.485
INFO: Waiting for the machine pools to be provisioned
STEP: Deploying StatefulSet on infra - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:345 @ 01/10/23 11:47:41.525
STEP: Creating statefulset - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:252 @ 01/10/23 11:47:41.977
STEP: Creating StorageClass object with name: intree-aws-ebs-volumes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:272 @ 01/10/23 11:47:42.06
STEP: Creating PodTemplateSpec config object - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:173 @ 01/10/23 11:47:42.566
STEP: Creating PersistentVolumeClaim config object - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:205 @ 01/10/23 11:47:42.566
STEP: Deploying Statefulset with name: intree-nginx-statefulset under namespace: default - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:367 @ 01/10/23 11:47:42.566
STEP: Ensuring Statefulset(intree-nginx-statefulset) is running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:628 @ 01/10/23 11:47:42.696
STEP: Retrieving IDs of dynamically provisioned volumes. - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:392 @ 01/10/23 11:48:43.077
STEP: Ensuring dynamically provisioned volumes exists - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:618 @ 01/10/23 11:48:43.353
INFO: Creating the workload cluster with name "only-csi-external-upgrade-us00l0" using the "external-csi" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster only-csi-external-upgrade-us00l0 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor external-csi
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/10/23 11:48:44.751
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by only-csi-external-upgrade-cxo73t/only-csi-external-upgrade-us00l0-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/10/23 11:48:44.813
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane only-csi-external-upgrade-cxo73t/only-csi-external-upgrade-us00l0-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/10/23 11:48:44.856
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/10/23 11:48:44.862
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/10/23 11:48:44.924
STEP: Checking all the machines controlled by only-csi-external-upgrade-us00l0-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 11:49:35.039
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for control-plane machines to have the upgraded kubernetes version - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:359 @ 01/10/23 11:49:35.093
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.15 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 11:49:35.104
STEP: Creating the LB service - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:367 @ 01/10/23 11:53:05.301
STEP: Creating service of type Load Balancer with name: test-svc-941eo8 under namespace: default - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:127 @ 01/10/23 11:53:05.301
STEP: Created Load Balancer service and ELB name is: a47e784b35c3446c8bdc8a5012fc1e52 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:151 @ 01/10/23 11:53:20.655
STEP: Verifying ELB with name a47e784b35c3446c8bdc8a5012fc1e52 present - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:597 @ 01/10/23 11:53:20.655
STEP: ELB with name a47e784b35c3446c8bdc8a5012fc1e52 exists - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:608 @ 01/10/23 11:53:20.982
STEP: Checking v1.22 StatefulSet still healthy after the upgrade - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:372 @ 01/10/23 11:53:20.982
STEP: Ensuring Statefulset(intree-nginx-statefulset) is running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:628 @ 01/10/23 11:53:20.982
STEP: Deploying StatefulSet on infra when K8s >= 1.23 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:377 @ 01/10/23 11:54:21.177
STEP: Creating statefulset - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:252 @ 01/10/23 11:54:21.177
STEP: Creating StorageClass object with name: postupgrade-aws-ebs-volumes - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:272 @ 01/10/23 11:54:21.245
STEP: Creating PodTemplateSpec config object - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:173 @ 01/10/23 11:54:21.703
STEP: Creating PersistentVolumeClaim config object - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:205 @ 01/10/23 11:54:21.703
STEP: Deploying Statefulset with name: postupgrade-nginx-statefulset under namespace: default - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:367 @ 01/10/23 11:54:21.703
STEP: Ensuring Statefulset(postupgrade-nginx-statefulset) is running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:628 @ 01/10/23 11:54:21.772
STEP: Retrieving IDs of dynamically provisioned volumes. - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:392 @ 01/10/23 11:55:21.975
STEP: Ensuring dynamically provisioned volumes exists - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/helpers_test.go:618 @ 01/10/23 11:55:22.241
STEP: Deleting LB service - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:382 @ 01/10/23 11:55:22.37
STEP: Deleting the Clusters - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:385 @ 01/10/23 11:55:22.438
STEP: Deleting cluster only-csi-external-upgrade-us00l0 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 11:55:22.458
STEP: Waiting for cluster only-csi-external-upgrade-us00l0 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 11:55:22.491
STEP: Dumping all the Cluster API resources in the "only-csi-external-upgrade-cxo73t" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:68 @ 01/10/23 12:15:22.492
STEP: Dumping all EC2 instances in the "only-csi-external-upgrade-cxo73t" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:72 @ 01/10/23 12:15:22.839
STEP: Deleting all clusters in the "only-csi-external-upgrade-cxo73t" namespace with intervals ["20m" "10s"] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:76 @ 01/10/23 12:15:23.533
STEP: Deleting cluster only-csi-external-upgrade-us00l0 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 12:15:23.559
INFO: Waiting for the Cluster only-csi-external-upgrade-cxo73t/only-csi-external-upgrade-us00l0 to be deleted
STEP: Waiting for cluster only-csi-external-upgrade-us00l0 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 12:15:23.577
STEP: Node 13 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:4} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:269 @ 01/10/23 12:35:24.579
[FAILED] Timed out after 1200.001s.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:176 @ 01/10/23 12:35:24.579
< Exit [It] should create volumes dynamically with external cloud provider - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:326 @ 01/10/23 12:35:24.579 (58m51.84s)
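Both deletion waits above ran the full 20m interval without the Cluster object disappearing, which typically points at a dependent resource whose finalizer never cleared. A hedged diagnostic sketch against the management cluster (the helper is illustrative, not part of the suite; namespace and name are taken from this run):

package e2e

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// dumpDeletionBlockers prints the usual suspects for a stuck Cluster delete:
// a set deletionTimestamp plus finalizers that never get removed, and the
// status conditions that explain why. Illustrative only.
func dumpDeletionBlockers(ctx context.Context, c client.Client) error {
	cluster := &clusterv1.Cluster{}
	key := types.NamespacedName{
		Namespace: "only-csi-external-upgrade-cxo73t", // from this run's log
		Name:      "only-csi-external-upgrade-us00l0",
	}
	if err := c.Get(ctx, key, cluster); err != nil {
		return err
	}
	fmt.Println("deletionTimestamp:", cluster.DeletionTimestamp)
	fmt.Println("finalizers:", cluster.Finalizers)
	for _, cond := range cluster.Status.Conditions {
		fmt.Printf("condition %s=%s reason=%s\n", cond.Type, cond.Status, cond.Reason)
	}
	return nil
}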
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\sGPU\-enabled\scluster\stest\sshould\screate\scluster\swith\ssingle\sworker$'
[FAILED] Timed out after 600.000s.
Job default/cuda-vector-add failed
Job: { "metadata": { "name": "cuda-vector-add", "namespace": "default", "uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7", "resourceVersion": "686", "generation": 1, "creationTimestamp": "2023-01-10T11:57:55Z", "labels": { "controller-uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7", "job-name": "cuda-vector-add" }, "annotations": { "batch.kubernetes.io/job-tracking": "" }, "managedFields": [ { "manager": "cluster-api-e2e", "operation": "Update", "apiVersion": "batch/v1", "time": "2023-01-10T11:57:55Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { "f:backoffLimit": {}, "f:completionMode": {}, "f:completions": {}, "f:parallelism": {}, "f:suspend": {}, "f:template": { "f:spec": { "f:containers": { "k:{\"name\":\"cuda-vector-add\"}": { ".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:nvidia.com/gpu": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {} } }, "f:dnsPolicy": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:terminationGracePeriodSeconds": {} } } } } }, { "manager": "kube-controller-manager", "operation": "Update", "apiVersion": "batch/v1", "time": "2023-01-10T11:57:55Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:status": { "f:active": {}, "f:ready": {}, "f:startTime": {}, "f:uncountedTerminatedPods": {} } }, "subresource": "status" } ] }, "spec": { "parallelism": 1, "completions": 1, "backoffLimit": 6, "selector": { "matchLabels": { "controller-uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "controller-uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7", "job-name": "cuda-vector-add" } }, "spec": { "containers": [ { "name": "cuda-vector-add", "image": "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.1-ubuntu18.04", "resources": { "limits": { "nvidia.com/gpu": "1" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "OnFailure", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "securityContext": {}, "schedulerName": "default-scheduler" } }, "completionMode": "NonIndexed", "suspend": false }, "status": { "startTime": "2023-01-10T11:57:55Z", "active": 1, "uncountedTerminatedPods": {}, "ready": 0 } }
LAST SEEN                      TYPE    REASON            OBJECT               MESSAGE
2023-01-10 11:57:55 +0000 UTC  Normal  SuccessfulCreate  job/cuda-vector-add  Created pod: cuda-vector-add-7k6rk
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:134 @ 01/10/23 12:07:56.42
from junit.e2e_suite.xml
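Stripped of managedFields and controller-filled defaults, the Job dumped above reduces to the following; this is a reconstruction from the JSON using client-go types (backoffLimit 6 and the selector, which the controller manages, are omitted for brevity):

package e2e

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cudaVectorAddJob mirrors the dumped default/cuda-vector-add Job: a single
// NonIndexed completion that can only schedule where nvidia.com/gpu is allocatable.
func cudaVectorAddJob() *batchv1.Job {
	one := int32(1)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "cuda-vector-add", Namespace: "default"},
		Spec: batchv1.JobSpec{
			Parallelism: &one,
			Completions: &one,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "cuda-vector-add",
						Image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.1-ubuntu18.04",
						Resources: corev1.ResourceRequirements{
							Limits: corev1.ResourceList{
								// The pod stays Pending until some node advertises this resource.
								"nvidia.com/gpu": resource.MustParse("1"),
							},
						},
					}},
				},
			},
		},
	}
}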
cluster.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp serverside-applied
awscluster.infrastructure.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp serverside-applied
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp-control-plane serverside-applied
awsmachinetemplate.infrastructure.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp-control-plane serverside-applied
clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied
machinedeployment.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp-md serverside-applied
awsmachinetemplate.infrastructure.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp-md serverside-applied
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp-md serverside-applied
configmap/cni-functional-gpu-cluster-rd1rtp-crs-0 serverside-applied
clusterresourceset.addons.cluster.x-k8s.io/functional-gpu-cluster-rd1rtp-crs-0 serverside-applied
configmap/nvidia-clusterpolicy-crd serverside-applied
configmap/nvidia-gpu-operator-components serverside-applied
> Enter [BeforeEach] [unmanaged] [functional] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:55 @ 01/10/23 11:36:32.737
< Exit [BeforeEach] [unmanaged] [functional] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:55 @ 01/10/23 11:36:32.737 (0s)
> Enter [It] should create cluster with single worker - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:114 @ 01/10/23 11:36:32.738
STEP: Creating a namespace for hosting the "functional-gpu-cluster" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:52 @ 01/10/23 11:36:32.739
INFO: Creating namespace functional-gpu-cluster-p64r58
INFO: Creating event watcher for namespace "functional-gpu-cluster-p64r58"
STEP: Node 4 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:4, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:187 @ 01/10/23 11:36:32.795
STEP: Node 4 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:4, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:216 @ 01/10/23 11:50:22.796
STEP: Creating cluster with a single worker - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:123 @ 01/10/23 11:50:22.796
INFO: Creating the workload cluster with name "functional-gpu-cluster-rd1rtp" using the "gpu" template (Kubernetes v1.25.3, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster functional-gpu-cluster-rd1rtp --infrastructure (default) --kubernetes-version v1.25.3 --control-plane-machine-count 1 --worker-machine-count 1 --flavor gpu
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/10/23 11:50:24.634
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by functional-gpu-cluster-p64r58/functional-gpu-cluster-rd1rtp-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/10/23 11:55:24.832
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane functional-gpu-cluster-p64r58/functional-gpu-cluster-rd1rtp-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/10/23 11:57:24.942
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/10/23 11:57:44.959
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/10/23 11:57:44.987
STEP: Checking all the machines controlled by functional-gpu-cluster-rd1rtp-md are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/10/23 11:57:55.008
INFO: Waiting for the machine pools to be provisioned
STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:57 @ 01/10/23 11:57:55.063
STEP: running a CUDA vector calculation job - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:63 @ 01/10/23 11:57:55.096
STEP: Node 4 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:4, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:269 @ 01/10/23 12:07:56.42
[FAILED] Timed out after 600.000s.
Job default/cuda-vector-add failed
Job: { "metadata": { "name": "cuda-vector-add", "namespace": "default", "uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7", "resourceVersion": "686", "generation": 1, "creationTimestamp": "2023-01-10T11:57:55Z", "labels": { "controller-uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7", "job-name": "cuda-vector-add" }, "annotations": { "batch.kubernetes.io/job-tracking": "" }, "managedFields": [ { "manager": "cluster-api-e2e", "operation": "Update", "apiVersion": "batch/v1", "time": "2023-01-10T11:57:55Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { "f:backoffLimit": {}, "f:completionMode": {}, "f:completions": {}, "f:parallelism": {}, "f:suspend": {}, "f:template": { "f:spec": { "f:containers": { "k:{\"name\":\"cuda-vector-add\"}": { ".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:nvidia.com/gpu": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {} } }, "f:dnsPolicy": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:terminationGracePeriodSeconds": {} } } } } }, { "manager": "kube-controller-manager", "operation": "Update", "apiVersion": "batch/v1", "time": "2023-01-10T11:57:55Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:status": { "f:active": {}, "f:ready": {}, "f:startTime": {}, "f:uncountedTerminatedPods": {} } }, "subresource": "status" } ] }, "spec": { "parallelism": 1, "completions": 1, "backoffLimit": 6, "selector": { "matchLabels": { "controller-uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "controller-uid": "8aac65e5-4bd5-4e4a-8ac9-f3165e13dac7", "job-name": "cuda-vector-add" } }, "spec": { "containers": [ { "name": "cuda-vector-add", "image": "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.1-ubuntu18.04", "resources": { "limits": { "nvidia.com/gpu": "1" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "OnFailure", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "securityContext": {}, "schedulerName": "default-scheduler" } }, "completionMode": "NonIndexed", "suspend": false }, "status": { "startTime": "2023-01-10T11:57:55Z", "active": 1, "uncountedTerminatedPods": {}, "ready": 0 } }
LAST SEEN                      TYPE    REASON            OBJECT               MESSAGE
2023-01-10 11:57:55 +0000 UTC  Normal  SuccessfulCreate  job/cuda-vector-add  Created pod: cuda-vector-add-7k6rk
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:134 @ 01/10/23 12:07:56.42
< Exit [It] should create cluster with single worker - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:114 @ 01/10/23 12:07:56.42 (31m23.682s)
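The only event is SuccessfulCreate and the Job status stayed at active:1/ready:0 for the full 600s, so the pod most likely sat Pending waiting for a node to advertise nvidia.com/gpu (e.g. the nvidia-gpu-operator-components applied via ClusterResourceSet not becoming ready in time). That is an inference, not something this log states. A hedged check one could run against the workload cluster:

package e2e

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printGPUAllocatable reports per-node nvidia.com/gpu allocatable; zero across
// all nodes would explain a perpetually Pending cuda-vector-add pod.
func printGPUAllocatable(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, node := range nodes.Items {
		// Missing key yields a zero Quantity, i.e. "0".
		qty := node.Status.Allocatable[corev1.ResourceName("nvidia.com/gpu")]
		fmt.Printf("%s: nvidia.com/gpu=%s\n", node.Name, qty.String())
	}
	return nil
}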
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Pool Spec Should successfully create a cluster with machine pool machines
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger KCP remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger machine deployment remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Self Hosted Spec Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Cluster Upgrade Spec - HA control plane with workers [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] ClusterClass Changes Spec - SSA immutability checks [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Self Hosted Spec [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec Should create a workload cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec with ClusterClass Should create a workload cluster
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=external AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] MachineDeployment misconfigurations MachineDeployment misconfigurations
capa-e2e [It] [unmanaged] [functional] Multitenancy test should create cluster with nested assumed role
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS S3 and Ignition parameter It should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS SSM Parameter as the Secret Backend should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with EFS driver should pass dynamic provisioning test
capa-e2e [It] [unmanaged] [functional] Workload cluster with spot instances should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] should create cluster with nested assumed role
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with AWS SSM Parameter as the Secret Backend [ClusterClass] should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with external infrastructure [ClusterClass] should create workload cluster in external VPC
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [It] [unmanaged] [functional] External infrastructure, external security groups, VPC peering, internal ELB and private subnet use only should create external clusters in peered VPC and with an internal ELB and only utilize a private subnet
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters Defining clusters in the same namespace should create the clusters
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters in different namespaces with machine failures should setup namespaces correctly for the two clusters
capa-e2e [It] [unmanaged] [functional] [Serial] Upgrade to main branch Kubernetes in same namespace should create the clusters