Result: FAILURE
Tests: 1 failed / 22 succeeded
Started: 2022-07-29 12:47
Elapsed: 1h53m
Revision: release-1.5

Test Failures


capa-e2e [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 should create volumes dynamically with external cloud provider 1h31m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[functional\]\sCSI\=in\-tree\sCCM\=in\-tree\sAWSCSIMigration\=off\:\supgrade\sto\sv1\.23\sshould\screate\svolumes\sdynamically\swith\sexternal\scloud\sprovider$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:260
Timed out after 1200.001s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/cluster_helpers.go:166
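The "Expected <bool>: false to be true" message is the shape Gomega gives a timed-out Eventually assertion: the cluster-api test framework (here, cluster_helpers.go) polls a boolean condition until it returns true or the 1200s budget runs out. A minimal Go sketch of that pattern, with an illustrative helper name, condition, and intervals rather than the framework's actual code:

package e2e

import (
	"time"

	. "github.com/onsi/gomega"
)

// waitForCondition sketches the polling assertion behind the failure above.
// The helper name, condition, timeout, and interval are illustrative only.
func waitForCondition(check func() bool) {
	// Re-evaluate check every 10s for up to 20m (1200s); if it never
	// returns true, Gomega fails with "Timed out after Ns. Expected
	// <bool>: false to be true".
	Eventually(func() bool {
		return check()
	}, 20*time.Minute, 10*time.Second).Should(BeTrue(),
		"condition never became true before the timeout")
}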
				
stdout/stderr from junit.e2e_suite.7.xml



22 Passed Tests

6 Skipped Tests

Error lines from build-log.txt

... skipping 21 lines ...
Collecting certifi>=2017.4.17
  Downloading certifi-2022.6.15-py3-none-any.whl (160 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.2/160.2 kB 9.9 MB/s eta 0:00:00
Installing collected packages: idna, charset-normalizer, certifi, requests
Successfully installed certifi-2022.6.15 charset-normalizer-2.1.0 idna-3.3 requests-2.28.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/pip/_internal/utils/logging.py", line 177, in emit
    self.console.print(renderable, overflow="ignore", crop=False, style=style)
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1752, in print
    extend(render(renderable, render_options))
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1390, in render
... skipping 571 lines ...
[1]  ✓ Installing CNI 🔌
[1]  • Installing StorageClass 💾  ...
[1]  ✓ Installing StorageClass 💾
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind4028626473
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" into the kind cluster "test-6j4xse": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" to "/tmp/image-tar1194433438/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.7.2" into the kind cluster "test-6j4xse": error saving image "quay.io/jetstack/cert-manager-webhook:v1.7.2" to "/tmp/image-tar34120310/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.7.2" into the kind cluster "test-6j4xse": error saving image "quay.io/jetstack/cert-manager-controller:v1.7.2" to "/tmp/image-tar2977063176/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.2"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "test-6j4xse": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2425056891/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "test-6j4xse": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar2207099949/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "test-6j4xse": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar3313563890/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
... skipping 1155 lines ...
[15]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/mhc_remediations.go:82
[15] ------------------------------
[15] SS
[15] JUnit report was created: /logs/artifacts/junit.e2e_suite.15.xml
[15] 
[15] Ran 1 of 3 Specs in 1676.695 seconds
[15] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 2 Skipped
[15] PASS
[16] STEP: Node 16 acquired resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[16] [BeforeEach] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade]
[16]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/cluster_upgrade.go:81
[16] STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
[18] STEP: Node 18 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
... skipping 115 lines ...
[10]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/mhc_remediations.go:114
[10] ------------------------------
[10] 
[10] JUnit report was created: /logs/artifacts/junit.e2e_suite.10.xml
[10] 
[10] Ran 1 of 1 Specs in 1919.997 seconds
[10] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[10] PASS
[4] STEP: Node 4 acquired resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[4] STEP: Creating a namespace for hosting the "functional-test-ssm-parameter-store" test spec
[11] STEP: Node 11 acquired resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[11] STEP: Creating a namespace for hosting the "functional-test-md-misconfigurations" test spec
[14] STEP: Node 14 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:4}
... skipping 215 lines ...
[9]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:551
[9] ------------------------------
[9] 
[9] JUnit report was created: /logs/artifacts/junit.e2e_suite.9.xml
[9] 
[9] Ran 2 of 2 Specs in 2422.910 seconds
[9] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[9] PASS
[7] STEP: Node 7 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:4}
[7] STEP: Creating a namespace for hosting the "csimigration-off-upgrade" test spec
[7] INFO: Creating namespace csimigration-off-upgrade-q0ze2x
[7] STEP: Creating first cluster with single control plane
[7] INFO: Creating the workload cluster with name "csimigration-off-upgrade-gpeb3q" using the "(default)" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
... skipping 45 lines ...
[3] STEP: Deleting namespace used for hosting the "" test spec
[3] INFO: Deleting namespace functional-test-ignition-zih3xm
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 2 of 3 Specs in 2537.881 seconds
[3] SUCCESS! -- 2 Passed | 0 Failed | 1 Pending | 0 Skipped
[3] PASS
[13] STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
[13] INFO: Deleting namespace clusterctl-upgrade-b5bspl
[13] [AfterEach] Clusterctl Upgrade Spec [from latest v1beta1 release to main]
[13]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:133
[8] STEP: Upgrading the machinepool instances
... skipping 8 lines ...
[13]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
[13] ------------------------------
[13] 
[13] JUnit report was created: /logs/artifacts/junit.e2e_suite.13.xml
[13] 
[13] Ran 1 of 1 Specs in 2560.587 seconds
[13] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[13] PASS
[8] INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-5acser/k8s-upgrade-and-conformance-6632zi-mp-0
[8] INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-5acser/k8s-upgrade-and-conformance-6632zi-mp-0 to be upgraded from v1.22.4 to v1.23.3
[5] STEP: Node 5 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:4}
[5] STEP: Creating a namespace for hosting the "only-csi-external-upgrade" test spec
[5] INFO: Creating namespace only-csi-external-upgrade-t7ic6v
... skipping 54 lines ...
[20]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/machine_pool.go:76
[20] ------------------------------
[20] 
[20] JUnit report was created: /logs/artifacts/junit.e2e_suite.20.xml
[20] 
[20] Ran 1 of 1 Specs in 2570.887 seconds
[20] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[20] PASS
[1] STEP: Deleting namespace used for hosting the "" test spec
[1] INFO: Deleting namespace functional-test-spot-instances-aalj92
[1] STEP: Node 1 released resources: {ec2-normal:4, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[1] 
[1] • [SLOW TEST:1092.123 seconds]
... skipping 41 lines ...
[2]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 1 of 1 Specs in 2657.168 seconds
[2] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[19] STEP: Node 19 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[19] 
[19] • [SLOW TEST:2261.792 seconds]
[19] [unmanaged] [functional]
[19] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:50
... skipping 8 lines ...
[19] STEP: Deleting namespace used for hosting the "" test spec
[19] INFO: Deleting namespace functional-efs-support-jd99pg
[19] 
[19] JUnit report was created: /logs/artifacts/junit.e2e_suite.19.xml
[19] 
[19] Ran 1 of 1 Specs in 2691.370 seconds
[19] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[19] PASS
[14] STEP: Retrieving IDs of dynamically provisioned volumes.
[14] STEP: Ensuring dynamically provisioned volumes exists
[14] INFO: Creating the workload cluster with name "csi-ccm-external-upgrade-t8kr7u" using the "external-cloud-provider" template (Kubernetes v1.23.3, 1 control-plane machines, 1 worker machines)
[14] INFO: Getting the cluster template yaml
[14] INFO: clusterctl config cluster csi-ccm-external-upgrade-t8kr7u --infrastructure (default) --kubernetes-version v1.23.3 --control-plane-machine-count 1 --worker-machine-count 1 --flavor external-cloud-provider
... skipping 38 lines ...
[18]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:163
[18] ------------------------------
[18] 
[18] JUnit report was created: /logs/artifacts/junit.e2e_suite.18.xml
[18] 
[18] Ran 1 of 1 Specs in 2738.264 seconds
[18] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[18] PASS
[11] INFO: Waiting for control plane to be initialized
[11] INFO: Waiting for the first control plane machine managed by functional-test-md-misconfigurations-glc8ao/functional-test-md-misconfigurations-2ypdjw-control-plane to be provisioned
[11] STEP: Waiting for one control plane node to exist
[17] STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
[17] INFO: Deleting namespace clusterctl-upgrade-obsoz1
... skipping 10 lines ...
[17]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
[17] ------------------------------
[17] 
[17] JUnit report was created: /logs/artifacts/junit.e2e_suite.17.xml
[17] 
[17] Ran 1 of 1 Specs in 2757.853 seconds
[17] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[17] PASS
[7] INFO: Waiting for control plane to be initialized
[7] INFO: Waiting for the first control plane machine managed by csimigration-off-upgrade-q0ze2x/csimigration-off-upgrade-gpeb3q-control-plane to be provisioned
[7] STEP: Waiting for one control plane node to exist
[5] INFO: Waiting for control plane to be initialized
[5] INFO: Waiting for the first control plane machine managed by only-csi-external-upgrade-t7ic6v/only-csi-external-upgrade-248blr-control-plane to be provisioned
... skipping 26 lines ...
[12]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/cluster_upgrade.go:115
[12] ------------------------------
[12] 
[12] JUnit report was created: /logs/artifacts/junit.e2e_suite.12.xml
[12] 
[12] Ran 1 of 1 Specs in 2882.524 seconds
[12] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[12] PASS
[5] INFO: Waiting for control plane to be ready
[5] INFO: Waiting for control plane only-csi-external-upgrade-t7ic6v/only-csi-external-upgrade-248blr-control-plane to be ready (implies underlying nodes to be ready as well)
[5] STEP: Waiting for the control plane to be ready
[7] INFO: Waiting for control plane to be ready
[7] INFO: Waiting for control plane csimigration-off-upgrade-q0ze2x/csimigration-off-upgrade-gpeb3q-control-plane to be ready (implies underlying nodes to be ready as well)
... skipping 65 lines ...
[4]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:466
[4] ------------------------------
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 1 of 1 Specs in 3079.379 seconds
[4] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS
[14] STEP: Creating the LB service
[14] STEP: Creating service of type Load Balancer with name: test-svc-hqj9mt under namespace: default
[5] STEP: Retrieving IDs of dynamically provisioned volumes.
[5] STEP: Ensuring dynamically provisioned volumes exists
[5] INFO: Creating the workload cluster with name "only-csi-external-upgrade-248blr" using the "external-csi" template (Kubernetes v1.23.3, 1 control-plane machines, 1 worker machines)
... skipping 100 lines ...
[8]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/cluster_upgrade.go:115
[8] ------------------------------
[8] 
[8] JUnit report was created: /logs/artifacts/junit.e2e_suite.8.xml
[8] 
[8] Ran 1 of 1 Specs in 3378.703 seconds
[8] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[8] PASS
[7] STEP: Creating the LB service
[7] STEP: Creating service of type Load Balancer with name: test-svc-6mxl14 under namespace: default
[7] STEP: Created Load Balancer service and ELB name is: af05bb732911a43738da51310ca85f9a
[7] STEP: Verifying ELB with name af05bb732911a43738da51310ca85f9a present
[7] STEP: ELB with name af05bb732911a43738da51310ca85f9a exists
... skipping 53 lines ...
[14]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:397
[14] ------------------------------
[14] 
[14] JUnit report was created: /logs/artifacts/junit.e2e_suite.14.xml
[14] 
[14] Ran 1 of 1 Specs in 3595.302 seconds
[14] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[14] PASS
[16] INFO: Waiting for kube-proxy to have the upgraded kubernetes version
[16] STEP: Ensuring kube-proxy has the correct image
[16] INFO: Waiting for CoreDNS to have the upgraded image tag
[16] STEP: Ensuring CoreDNS has the correct image
[16] INFO: Waiting for etcd to have the upgraded image tag
... skipping 20 lines ...
[11]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:500
[11] ------------------------------
[11] 
[11] JUnit report was created: /logs/artifacts/junit.e2e_suite.11.xml
[11] 
[11] Ran 1 of 1 Specs in 3762.082 seconds
[11] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[11] PASS
[6] STEP: Deleting namespace used for hosting the "" test spec
[6] INFO: Deleting namespace functional-gpu-cluster-isan4e
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 1 of 1 Specs in 3886.541 seconds
[6] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS
[16] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[16] INFO: Deleting namespace k8s-upgrade-and-conformance-945yun
[16] 
[16] • [SLOW TEST:3514.053 seconds]
[16] [unmanaged] [Cluster API Framework]
... skipping 4 lines ...
[16]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/cluster_upgrade.go:115
[16] ------------------------------
[16] 
[16] JUnit report was created: /logs/artifacts/junit.e2e_suite.16.xml
[16] 
[16] Ran 1 of 1 Specs in 3942.133 seconds
[16] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[16] PASS
[5] STEP: Deleting retained dynamically provisioned volumes
[5] STEP: Deleting dynamically provisioned volumes
[5] STEP: Deleted dynamically provisioned volume with ID: vol-05aeba41ca52620d7
[5] STEP: Deleted dynamically provisioned volume with ID: vol-0ff5d6992d2613859
[5] STEP: PASSED!
... skipping 13 lines ...
[5]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:328
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 1 of 1 Specs in 3985.782 seconds
[5] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[5] PASS
[7] STEP: Dumping all the Cluster API resources in the "csimigration-off-upgrade-q0ze2x" namespace
[7] STEP: Dumping all EC2 instances in the "csimigration-off-upgrade-q0ze2x" namespace
[7] STEP: Deleting all clusters in the "csimigration-off-upgrade-q0ze2x" namespace with intervals ["20m" "10s"]
[7] STEP: Deleting cluster csimigration-off-upgrade-gpeb3q
[7] INFO: Waiting for the Cluster csimigration-off-upgrade-q0ze2x/csimigration-off-upgrade-gpeb3q to be deleted
... skipping 54 lines ...
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] 
[7] Summarizing 1 Failure:
[7] 
[7] [Fail] [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 [It] should create volumes dynamically with external cloud provider 
[7] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/cluster_helpers.go:166
[7] 
[7] Ran 1 of 2 Specs in 5907.635 seconds
[7] FAIL! -- 0 Passed | 1 Failed | 1 Pending | 0 Skipped
[7] --- FAIL: TestE2E (5907.68s)
[7] FAIL
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 2 of 4 Specs in 6536.052 seconds
[1] SUCCESS! -- 2 Passed | 0 Failed | 2 Pending | 0 Skipped
[1] PASS

Ginkgo ran 1 suite in 1h50m36.407611096s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	110m36.427s
user	23m16.105s
sys	6m54.375s
make: *** [Makefile:404: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...