Result: FAILURE
Tests: 1 failed / 21 succeeded
Started: 2022-08-11 00:51
Elapsed: 1h26m
Revision: release-1.5

Test Failures


capa-e2e [unmanaged] [functional] GPU-enabled cluster test should create cluster with single worker (19m50s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[functional\]\sGPU\-enabled\scluster\stest\sshould\screate\scluster\swith\ssingle\sworker$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:116
Timed out after 600.076s.
Job default/cuda-vector-add failed
Job:
{
  "metadata": {
    "name": "cuda-vector-add",
    "namespace": "default",
    "uid": "e412cdae-a8d1-4c95-8c10-c206c751486c",
    "resourceVersion": "820",
    "generation": 1,
    "creationTimestamp": "2022-08-11T01:11:00Z",
    "labels": {
      "controller-uid": "e412cdae-a8d1-4c95-8c10-c206c751486c",
      "job-name": "cuda-vector-add"
    },
    "managedFields": [
      {
        "manager": "cluster-api-e2e",
        "operation": "Update",
        "apiVersion": "batch/v1",
        "time": "2022-08-11T01:11:00Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            "f:backoffLimit": {},
            "f:completionMode": {},
            "f:completions": {},
            "f:parallelism": {},
            "f:suspend": {},
            "f:template": {
              "f:spec": {
                "f:containers": {
                  "k:{\"name\":\"cuda-vector-add\"}": {
                    ".": {},
                    "f:image": {},
                    "f:imagePullPolicy": {},
                    "f:name": {},
                    "f:resources": {
                      ".": {},
                      "f:limits": {
                        ".": {},
                        "f:nvidia.com/gpu": {}
                      }
                    },
                    "f:terminationMessagePath": {},
                    "f:terminationMessagePolicy": {}
                  }
                },
                "f:dnsPolicy": {},
                "f:restartPolicy": {},
                "f:schedulerName": {},
                "f:securityContext": {},
                "f:terminationGracePeriodSeconds": {}
              }
            }
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "batch/v1",
        "time": "2022-08-11T01:11:00Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:status": {
            "f:active": {},
            "f:ready": {},
            "f:startTime": {}
          }
        },
        "subresource": "status"
      }
    ]
  },
  "spec": {
    "parallelism": 1,
    "completions": 1,
    "backoffLimit": 6,
    "selector": {
      "matchLabels": {
        "controller-uid": "e412cdae-a8d1-4c95-8c10-c206c751486c"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "controller-uid": "e412cdae-a8d1-4c95-8c10-c206c751486c",
          "job-name": "cuda-vector-add"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "cuda-vector-add",
            "image": "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.1-ubuntu18.04",
            "resources": {
              "limits": {
                "nvidia.com/gpu": "1"
              }
            },
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent"
          }
        ],
        "restartPolicy": "OnFailure",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "completionMode": "NonIndexed",
    "suspend": false
  },
  "status": {
    "startTime": "2022-08-11T01:11:00Z",
    "active": 1,
    "ready": 0
  }
}
LAST SEEN  TYPE  REASON  OBJECT  MESSAGE

Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:134
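The bare "Expected <bool>: false to be true" is what Gomega prints when an Eventually poll returns false until its deadline. A minimal sketch of that kind of check follows, assuming a controller-runtime client; the function name and polling interval are illustrative, not the exact code at gpu.go:134:

    // Sketch of the kind of Gomega poll that yields
    // "Expected <bool>: false to be true" on timeout; illustrative
    // only, not the actual code at test/e2e/shared/gpu.go:134.
    package shared

    import (
    	"context"
    	"time"

    	. "github.com/onsi/gomega"
    	batchv1 "k8s.io/api/batch/v1"
    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/controller-runtime/pkg/client"
    )

    // waitForJobComplete polls the cuda-vector-add Job until it reports a
    // JobComplete condition or the 10-minute budget (the ~600s seen in the
    // log) runs out, at which point Gomega fails the boolean assertion.
    func waitForJobComplete(ctx context.Context, c client.Client) {
    	Eventually(func() bool {
    		job := &batchv1.Job{}
    		key := client.ObjectKey{Namespace: "default", Name: "cuda-vector-add"}
    		if err := c.Get(ctx, key, job); err != nil {
    			return false
    		}
    		for _, cond := range job.Status.Conditions {
    			if cond.Type == batchv1.JobComplete && cond.Status == corev1.ConditionTrue {
    				return true
    			}
    		}
    		return false
    	}, 10*time.Minute, 15*time.Second).Should(BeTrue(), "Job default/cuda-vector-add failed")
    }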
				
Full stdout/stderr is in junit.e2e_suite.1.xml.



21 passed tests and 6 skipped tests (lists collapsed in this view).
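For context on the failure above: the Job dump shows "active": 1 with "ready": 0 and an empty events table, which is consistent with a pod that was created but never ran to completion, for example because no node had advertised the nvidia.com/gpu resource when the poll expired. A hedged client-go sketch for pulling the pod events that would confirm or rule that out; the kubeconfig path and the name-prefix filter are assumptions, not part of the test suite:

    // Illustrative snippet for listing events tied to the cuda-vector-add
    // Job's pods in the workload cluster; not part of the e2e suite.
    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path: the e2e framework writes a kubeconfig per
    	// workload cluster under the artifacts directory.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/workload.kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// A FailedScheduling event mentioning nvidia.com/gpu here would mean
    	// the GPU device plugin had not yet made the resource schedulable.
    	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range events.Items {
    		if strings.HasPrefix(e.InvolvedObject.Name, "cuda-vector-add") {
    			fmt.Printf("%s\t%s\t%s/%s\t%s\n", e.Type, e.Reason,
    				e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
    		}
    	}
    }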

Error lines from build-log.txt

... skipping 21 lines ...
  Downloading certifi-2022.6.15-py3-none-any.whl (160 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.2/160.2 kB 10.3 MB/s eta 0:00:00
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests) (1.26.11)
Installing collected packages: idna, charset-normalizer, certifi, requests
Successfully installed certifi-2022.6.15 charset-normalizer-2.1.0 idna-3.3 requests-2.28.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/pip/_internal/utils/logging.py", line 177, in emit
    self.console.print(renderable, overflow="ignore", crop=False, style=style)
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1752, in print
    extend(render(renderable, render_options))
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1390, in render
... skipping 584 lines ...
[1]  ✓ Installing CNI 🔌
[1]  • Installing StorageClass 💾  ...
[1]  ✓ Installing StorageClass 💾
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind3906820010
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" into the kind cluster "test-bp5h8j": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" to "/tmp/image-tar3240782405/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.7.2" into the kind cluster "test-bp5h8j": error saving image "quay.io/jetstack/cert-manager-webhook:v1.7.2" to "/tmp/image-tar2646493520/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.7.2" into the kind cluster "test-bp5h8j": error saving image "quay.io/jetstack/cert-manager-controller:v1.7.2" to "/tmp/image-tar873875207/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5" into the kind cluster "test-bp5h8j": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5" to "/tmp/image-tar2464343580/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5" into the kind cluster "test-bp5h8j": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5" to "/tmp/image-tar2069188735/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5" into the kind cluster "test-bp5h8j": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5" to "/tmp/image-tar951609083/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
... skipping 1095 lines ...
[12]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:163
[12] ------------------------------
[12] 
[12] JUnit report was created: /logs/artifacts/junit.e2e_suite.12.xml
[12] 
[12] Ran 1 of 1 Specs in 1291.318 seconds
[12] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[12] PASS
[13] STEP: Node 13 acquired resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[13] [BeforeEach] Running the quick-start spec with ClusterClass
[13]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/quick_start.go:62
[13] STEP: Creating a namespace for hosting the "quick-start" test spec
[13] INFO: Creating namespace quick-start-h6p6en
... skipping 30 lines ...
[11]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:709
[11] ------------------------------
[11] 
[11] JUnit report was created: /logs/artifacts/junit.e2e_suite.11.xml
[11] 
[11] Ran 1 of 3 Specs in 1394.765 seconds
[11] SUCCESS! -- 1 Passed | 0 Failed | 2 Pending | 0 Skipped
[11] PASS
[15] STEP: Node 15 acquired resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[15] [BeforeEach] Running the quick-start spec
[15]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/quick_start.go:62
[15] STEP: Creating a namespace for hosting the "quick-start" test spec
[15] INFO: Creating namespace quick-start-wfyac0
... skipping 45 lines ...
[1]   GPU-enabled cluster test
[1]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:115
[1]     should create cluster with single worker [It]
[1]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:116
[1] 
[1]     Timed out after 600.076s.
[1]     Job default/cuda-vector-add failed
[1]     Job:
[1]     {
[1]       "metadata": {
[1]         "name": "cuda-vector-add",
[1]         "namespace": "default",
[1]         "uid": "e412cdae-a8d1-4c95-8c10-c206c751486c",
... skipping 200 lines ...
[10] INFO: Getting the cluster template yaml
[10] INFO: clusterctl config cluster k8s-upgrade-and-conformance-6fvaw5 --infrastructure (default) --kubernetes-version v1.23.6 --control-plane-machine-count 3 --worker-machine-count 0 --flavor kcp-scale-in
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 1 of 1 Specs in 1559.593 seconds
[3] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS
[10] INFO: Applying the cluster template yaml to the cluster
[10] cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-6fvaw5 created
[10] awscluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-6fvaw5 created
[10] kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-6fvaw5-control-plane created
[10] awsmachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-6fvaw5-control-plane created
... skipping 20 lines ...
[8]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/mhc_remediations.go:82
[8] ------------------------------
[8] 
[8] JUnit report was created: /logs/artifacts/junit.e2e_suite.8.xml
[8] 
[8] Ran 1 of 1 Specs in 1572.414 seconds
[8] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[8] PASS
[5] STEP: Node 5 acquired resources: {ec2-normal:0, vpc:1, eip:3, ngw:3, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[5] [BeforeEach] Machine Remediation Spec
[5]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/mhc_remediations.go:68
[5] STEP: Creating a namespace for hosting the "mhc-remediation" test spec
[5] INFO: Creating namespace mhc-remediation-8xrur7
... skipping 75 lines ...
[9]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:551
[9] ------------------------------
[9] 
[9] JUnit report was created: /logs/artifacts/junit.e2e_suite.9.xml
[9] 
[9] Ran 1 of 1 Specs in 1752.402 seconds
[9] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[9] PASS
[6] INFO: Waiting for the machine deployments to be provisioned
[6] STEP: Waiting for the workload nodes to exist
[6] INFO: Waiting for the machine pools to be provisioned
[6] STEP: Deploying StatefulSet on infra
[6] STEP: Creating statefulset
... skipping 121 lines ...
[7]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:328
[7] ------------------------------
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] Ran 1 of 2 Specs in 1994.265 seconds
[7] SUCCESS! -- 1 Passed | 0 Failed | 1 Pending | 0 Skipped
[7] PASS
[16] STEP: Node 16 acquired resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[16] [BeforeEach] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade]
[16]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/cluster_upgrade.go:81
[16] STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
[16] INFO: Creating namespace k8s-upgrade-and-conformance-12bm6j
... skipping 150 lines ...
[2]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/self_hosted.go:80
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 1 of 1 Specs in 2390.080 seconds
[2] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[6] STEP: Retrieving IDs of dynamically provisioned volumes.
[6] STEP: Ensuring dynamically provisioned volumes exists
[6] STEP: Deleting LB service
[6] STEP: Deleting the Clusters
[6] STEP: Deleting cluster csimigration-off-upgrade-xflhda
... skipping 23 lines ...
[15]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/quick_start.go:77
[15] ------------------------------
[15] 
[15] JUnit report was created: /logs/artifacts/junit.e2e_suite.15.xml
[15] 
[15] Ran 1 of 1 Specs in 2450.182 seconds
[15] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[15] PASS
[13] INFO: Waiting for the machine deployments to be provisioned
[13] STEP: Waiting for the workload nodes to exist
[13] INFO: Waiting for the machine pools to be provisioned
[13] STEP: PASSED!
[13] [AfterEach] Running the quick-start spec with ClusterClass
... skipping 52 lines ...
[19]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/clusterctl_upgrade.go:147
[19] ------------------------------
[19] 
[19] JUnit report was created: /logs/artifacts/junit.e2e_suite.19.xml
[19] 
[19] Ran 1 of 1 Specs in 2574.673 seconds
[19] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[19] PASS
[17] STEP: Deleting namespace used for hosting the "" test spec
[17] INFO: Deleting namespace functional-test-ssm-parameter-store-0e4aqv
[17] STEP: Node 17 released resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[17] 
[17] • [SLOW TEST:2274.534 seconds]
... skipping 5 lines ...
[17]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:466
[17] ------------------------------
[17] 
[17] JUnit report was created: /logs/artifacts/junit.e2e_suite.17.xml
[17] 
[17] Ran 1 of 1 Specs in 2585.819 seconds
[17] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[17] PASS
[5] INFO: Waiting for control plane mhc-remediation-8xrur7/mhc-remediation-ma52f2-control-plane to be ready (implies underlying nodes to be ready as well)
[5] STEP: Waiting for the control plane to be ready
[5] INFO: Waiting for the machine deployments to be provisioned
[5] STEP: Waiting for the workload nodes to exist
[5] INFO: Waiting for the machine pools to be provisioned
... skipping 70 lines ...
[6]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:260
[6] ------------------------------
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 1 of 1 Specs in 2903.779 seconds
[6] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS
[16] INFO: Waiting for control plane k8s-upgrade-and-conformance-12bm6j/k8s-upgrade-and-conformance-dsvk7x-control-plane to be ready (implies underlying nodes to be ready as well)
[16] STEP: Waiting for the control plane to be ready
[16] INFO: Waiting for the machine deployments to be provisioned
[16] STEP: Waiting for the workload nodes to exist
[16] INFO: Waiting for the machine pools to be provisioned
... skipping 22 lines ...
[14]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:397
[14] ------------------------------
[14] 
[14] JUnit report was created: /logs/artifacts/junit.e2e_suite.14.xml
[14] 
[14] Ran 1 of 1 Specs in 2994.404 seconds
[14] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[14] PASS
[13] STEP: Deleting namespace used for hosting the "quick-start" test spec
[13] INFO: Deleting namespace quick-start-h6p6en
[13] [AfterEach] Running the quick-start spec with ClusterClass
[13]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_quick_clusterclass_test.go:67
[13] STEP: Node 13 released resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
... skipping 14 lines ...
[13]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/quick_start.go:77
[13] ------------------------------
[13] 
[13] JUnit report was created: /logs/artifacts/junit.e2e_suite.13.xml
[13] 
[13] Ran 2 of 2 Specs in 2997.055 seconds
[13] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[13] PASS
[20] STEP: Deleting namespace used for hosting the "machine-pool" test spec
[20] INFO: Deleting namespace machine-pool-c6ekb2
[20] [AfterEach] Machine Pool Spec
[20]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:85
[20] STEP: Node 20 released resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
... skipping 7 lines ...
[20]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/machine_pool.go:76
[20] ------------------------------
[20] 
[20] JUnit report was created: /logs/artifacts/junit.e2e_suite.20.xml
[20] 
[20] Ran 1 of 2 Specs in 3033.405 seconds
[20] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 1 Skipped
[20] PASS
[18] STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
[18] INFO: Deleting namespace clusterctl-upgrade-a0kr5s
[18] [AfterEach] Clusterctl Upgrade Spec [from v1alpha4]
[18]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:158
[18] STEP: Node 18 released resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
... skipping 7 lines ...
[18]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/clusterctl_upgrade.go:147
[18] ------------------------------
[18] 
[18] JUnit report was created: /logs/artifacts/junit.e2e_suite.18.xml
[18] 
[18] Ran 1 of 1 Specs in 3209.803 seconds
[18] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[18] PASS
[5] STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
[5] INFO: Deleting namespace mhc-remediation-8xrur7
[5] [AfterEach] Machine Remediation Spec
[5]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:63
[5] STEP: Node 5 released resources: {ec2-normal:0, vpc:1, eip:3, ngw:3, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
... skipping 7 lines ...
[5]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/mhc_remediations.go:114
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 1 of 2 Specs in 3257.306 seconds
[5] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 1 Skipped
[5] PASS
[4] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[4] INFO: Deleting namespace k8s-upgrade-and-conformance-zaguy6
[4] [AfterEach] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade]
[4]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:183
[4] STEP: Node 4 released resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
... skipping 7 lines ...
[4]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/cluster_upgrade.go:115
[4] ------------------------------
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 1 of 1 Specs in 3309.996 seconds
[4] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS
[10] INFO: Waiting for kube-proxy to have the upgraded kubernetes version
[10] STEP: Ensuring kube-proxy has the correct image
[10] INFO: Waiting for CoreDNS to have the upgraded image tag
[10] STEP: Ensuring CoreDNS has the correct image
[10] INFO: Waiting for etcd to have the upgraded image tag
... skipping 34 lines ...
[10]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/cluster_upgrade.go:115
[10] ------------------------------
[10] 
[10] JUnit report was created: /logs/artifacts/junit.e2e_suite.10.xml
[10] 
[10] Ran 1 of 1 Specs in 3873.736 seconds
[10] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[10] PASS
[16] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[16] INFO: Deleting namespace k8s-upgrade-and-conformance-12bm6j
[16] [AfterEach] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade]
[16]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:233
[16] STEP: Node 16 released resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
... skipping 15 lines ...
[16] STEP: Deleting namespace used for hosting the "" test spec
[16] INFO: Deleting namespace functional-test-ignition-z63y70
[16] 
[16] JUnit report was created: /logs/artifacts/junit.e2e_suite.16.xml
[16] 
[16] Ran 2 of 3 Specs in 4390.601 seconds
[16] SUCCESS! -- 2 Passed | 0 Failed | 1 Pending | 0 Skipped
[16] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] 
[1] Summarizing 1 Failure:
[1] 
[1] [Fail] [unmanaged] [functional] GPU-enabled cluster test [It] should create cluster with single worker 
[1] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:134
[1] 
[1] Ran 1 of 1 Specs in 4918.993 seconds
[1] FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
[1] --- FAIL: TestE2E (4919.05s)
[1] FAIL

Ginkgo ran 1 suite in 1h23m35.686313759s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	83m35.696s
user	25m42.575s
sys	6m48.733s
make: *** [Makefile:404: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...