Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-05 09:55
Elapsed: 48m45s
Revision:
Uploader: crier

No Test Failures!


6 Passed Tests

8 Skipped Tests

Error lines from build-log.txt

... skipping 19 lines ...
Collecting charset-normalizer<3,>=2
  Downloading charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests) (1.26.11)
Installing collected packages: idna, charset-normalizer, certifi, requests
Successfully installed certifi-2022.6.15 charset-normalizer-2.1.1 idna-3.3 requests-2.28.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
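
The warning above is pip's standard advice when packages are installed as root; roughly, the recommended setup would look like the sketch below (the environment path is illustrative, not something this job uses).

    # Hypothetical alternative to installing into the system interpreter as root:
    # create and activate a virtual environment, then install the same package.
    python3 -m venv /tmp/e2e-venv
    . /tmp/e2e-venv/bin/activate
    pip install requests
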
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/pip/_internal/utils/logging.py", line 177, in emit
    self.console.print(renderable, overflow="ignore", crop=False, style=style)
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1752, in print
    extend(render(renderable, render_options))
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1390, in render
... skipping 446 lines ...
[1] STEP: Reading the ClusterResourceSet manifest ../../data/cni/calico.yaml
[1] STEP: Setting up the bootstrap cluster
[1] INFO: Creating a kind cluster with name "test-0rn4a7"
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind4167272204
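
The bootstrap cluster above is a plain kind cluster; the cluster name and kubeconfig path come from the log, and the CLI sketch below is only an approximation of what the Go test framework does internally.

    # Sketch: create the same kind cluster by hand and point kubectl at it.
    kind create cluster --name test-0rn4a7 --kubeconfig /tmp/e2e-kind4167272204
    export KUBECONFIG=/tmp/e2e-kind4167272204
    kubectl cluster-info
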
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.1.0"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.1.0" into the kind cluster "test-0rn4a7": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.1.0" to "/tmp/image-tar8790574/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.1.0"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.1.0" into the kind cluster "test-0rn4a7": error saving image "quay.io/jetstack/cert-manager-webhook:v1.1.0" to "/tmp/image-tar2727615352/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.1.0"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.1.0" into the kind cluster "test-0rn4a7": error saving image "quay.io/jetstack/cert-manager-controller:v1.1.0" to "/tmp/image-tar488631598/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v0.4.7"
[1] INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v0.4.7" into the kind cluster "test-0rn4a7": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v0.4.7" to "/tmp/image-tar1358501274/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v0.4.7"
[1] INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v0.4.7" into the kind cluster "test-0rn4a7": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v0.4.7" to "/tmp/image-tar3302994762/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v0.4.7"
[1] INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v0.4.7" into the kind cluster "test-0rn4a7": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v0.4.7" to "/tmp/image-tar3209283736/image.tar": unable to read image data: Error response from daemon: reference does not exist
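
Each "reference does not exist" warning above means the named image was not present in the local Docker daemon when the framework tried to archive it for kind; the warnings are non-fatal here, and the suite continues (the images are presumably pulled from their registries inside the cluster instead). A rough manual equivalent that avoids the warning, assuming a Docker-backed kind cluster named test-0rn4a7:

    # Sketch: pre-pull an image into the local daemon, then side-load it into kind.
    docker pull quay.io/jetstack/cert-manager-cainjector:v1.1.0
    kind load docker-image quay.io/jetstack/cert-manager-cainjector:v1.1.0 --name test-0rn4a7
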
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
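
The steps from node [1] above initialize Cluster API and the AWS provider on the bootstrap cluster. A minimal shell sketch of the same flow; the clusterctl command is taken from the log, while the credential-encoding step and the readiness check are assumptions about how it could be done by hand:

    # Sketch of the bootstrap initialization performed by node [1].
    export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
    clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
    # Rough equivalent of waiting for capa-system/capa-controller-manager to become available:
    kubectl -n capa-system rollout status deployment/capa-controller-manager
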
... skipping 101 lines ...
[7] STEP: Setting environment variable: key=AWS_REGION, value=us-west-2
[7] STEP: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] Ran 0 of 0 Specs in 291.817 seconds
[7] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[7] PASS
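
The [N] prefixes throughout the log are Ginkgo parallel node numbers: seven nodes run the suite's specs concurrently, each writes its own junit.e2e_suite.N.xml, and node 7 simply had no specs assigned to it (0 of 0). A hypothetical invocation along these lines; the job's actual Makefile target and flags are not shown in this excerpt:

    # Hypothetical: run the unmanaged e2e suite across 7 parallel Ginkgo nodes,
    # streaming each node's output with its node-number prefix.
    ginkgo -nodes=7 -stream ./test/e2e/suites/unmanaged
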
[1] STEP: Node 1 acquired resources: {ec2:2, vpc:1, eip:3, ngw:1, igw:1, classiclb:1}
[1] [BeforeEach] Running the quick-start spec
[1]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/quick_start.go:62
[1] STEP: Creating a namespace for hosting the "quick-start" test spec
[1] INFO: Creating namespace quick-start-9r8vii
... skipping 301 lines ...
[2]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/mhc_remediations.go:83
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 1 of 1 Specs in 1284.847 seconds
[2] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[4] STEP: Scaling the machine pool up
[4] INFO: Patching the replica count in Machine Pool machine-pool-xz7m5h/machine-pool-c4utd6-mp-0
[4] STEP: Waiting for the machine pool workload nodes to exist
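
Scaling the machine pool up (and later down) is done by patching the replica count on the MachinePool object named in the log. A hedged kubectl equivalent; the target replica count is an assumption, since the log does not print it:

    # Sketch: patch .spec.replicas on the MachinePool, as the test step describes.
    # The replica count of 3 is an assumption for illustration only.
    kubectl -n machine-pool-xz7m5h patch machinepool machine-pool-c4utd6-mp-0 \
      --type merge -p '{"spec":{"replicas":3}}'
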
[3] STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
[3] INFO: Deleting namespace mhc-remediation-oas3xz
... skipping 10 lines ...
[3]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/mhc_remediations.go:115
[3] ------------------------------
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 1 of 1 Specs in 1505.238 seconds
[3] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS
[5] STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
[5] INFO: Deleting namespace kcp-upgrade-e5mb2n
[5] 
[5] • [SLOW TEST:1216.348 seconds]
[5] [unmanaged] [Cluster API Framework]
... skipping 4 lines ...
[5]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/kcp_upgrade.go:75
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 1 of 1 Specs in 1508.028 seconds
[5] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[5] PASS
[4] STEP: Scaling the machine pool down
[4] INFO: Patching the replica count in Machine Pool machine-pool-xz7m5h/machine-pool-c4utd6-mp-0
[4] STEP: Waiting for the machine pool workload nodes to exist
[4] STEP: PASSED!
[4] [AfterEach] Machine Pool Spec
... skipping 33 lines ...
[4]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/machine_pool.go:76
[4] ------------------------------
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 1 of 1 Specs in 2450.571 seconds
[4] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS
[6] STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
[6] INFO: Deleting namespace kcp-upgrade-1oij7w
[6] [AfterEach] KCP Upgrade Spec - HA Control Plane Cluster using Scale-In
[6]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:81
[6] STEP: Node 6 released resources: {ec2:4, vpc:1, eip:3, ngw:1, igw:1, classiclb:1}
... skipping 7 lines ...
[6]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/kcp_upgrade.go:75
[6] ------------------------------
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 1 of 1 Specs in 2530.994 seconds
[6] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
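
Teardown deletes the cluster-api-provider-aws-sigs-k8s-io CloudFormation stack, which holds the IAM resources provisioned for the run (typically created up front with clusterawsadm bootstrap iam create-cloudformation-stack). A rough manual equivalent using the AWS CLI, assuming default credentials and region are configured:

    # Sketch of the teardown step; in this job it is driven by the test framework.
    aws cloudformation delete-stack --stack-name cluster-api-provider-aws-sigs-k8s-io
    aws cloudformation wait stack-delete-complete --stack-name cluster-api-provider-aws-sigs-k8s-io
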
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 1 of 9 Specs in 2650.378 seconds
[1] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 8 Skipped
[1] PASS

Ginkgo ran 1 suite in 45m40.1991421s
Test Suite Passed

real	45m40.207s
... skipping 12 lines ...