Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2022-09-19 09:58
Elapsed: 49m49s
Revision: release-0.7

Test Failures


capa-e2e [unmanaged] [Cluster API Framework] KCP Upgrade Spec - Single Control Plane Cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 25m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[Cluster\sAPI\sFramework\]\sKCP\sUpgrade\sSpec\s\-\sSingle\sControl\sPlane\sCluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/kcp_upgrade.go:75
Timed out after 1500.000s.
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/framework/cluster_helpers.go:134
				
(stdout/stderr captured in junit.e2e_suite.6.xml)



Passed tests: 5 (details omitted)

Skipped tests: 8 (details omitted)

Error lines from build-log.txt

... skipping 18 lines ...
Collecting idna<4,>=2.5
  Downloading idna-3.4-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 8.2 MB/s eta 0:00:00
Installing collected packages: idna, charset-normalizer, certifi, requests
Successfully installed certifi-2022.9.14 charset-normalizer-2.1.1 idna-3.4 requests-2.28.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/pip/_internal/utils/logging.py", line 177, in emit
    self.console.print(renderable, overflow="ignore", crop=False, style=style)
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1752, in print
    extend(render(renderable, render_options))
  File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/rich/console.py", line 1390, in render
... skipping 450 lines ...
[1] STEP: Reading the ClusterResourceSet manifest ../../data/cni/calico.yaml
[1] STEP: Setting up the bootstrap cluster
[1] INFO: Creating a kind cluster with name "test-fwzues"
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind511962695
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.1.0"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.1.0" into the kind cluster "test-fwzues": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.1.0" to "/tmp/image-tar3982887678/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.1.0"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.1.0" into the kind cluster "test-fwzues": error saving image "quay.io/jetstack/cert-manager-webhook:v1.1.0" to "/tmp/image-tar597517896/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.1.0"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.1.0" into the kind cluster "test-fwzues": error saving image "quay.io/jetstack/cert-manager-controller:v1.1.0" to "/tmp/image-tar484544137/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v0.4.7"
[1] INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v0.4.7" into the kind cluster "test-fwzues": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v0.4.7" to "/tmp/image-tar3505288742/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v0.4.7"
[1] INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v0.4.7" into the kind cluster "test-fwzues": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v0.4.7" to "/tmp/image-tar1502936651/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v0.4.7"
[1] INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v0.4.7" into the kind cluster "test-fwzues": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v0.4.7" to "/tmp/image-tar1033044851/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
... skipping 361 lines ...
[7]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/quick_start.go:77
[7] ------------------------------
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] Ran 1 of 1 Specs in 1100.768 seconds
[7] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[7] PASS
[2] STEP: PASSED!
[2] [AfterEach] Machine Remediation Spec
[2]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/mhc_remediations.go:147
[2] STEP: Dumping logs from the "mhc-remediation-a3mt74" workload cluster
[2] STEP: Dumping all the Cluster API resources in the "mhc-remediation-buyy63" namespace
... skipping 16 lines ...
[3]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/mhc_remediations.go:83
[3] ------------------------------
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 1 of 1 Specs in 1256.691 seconds
[3] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS
[4] STEP: Scaling the machine pool up
[4] INFO: Patching the replica count in Machine Pool machine-pool-emozy8/machine-pool-7gzznf-mp-0
[4] STEP: Waiting for the machine pool workload nodes to exist
[4] STEP: Scaling the machine pool down
[4] INFO: Patching the replica count in Machine Pool machine-pool-emozy8/machine-pool-7gzznf-mp-0
... skipping 13 lines ...
[2]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/mhc_remediations.go:115
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 1 of 1 Specs in 1610.473 seconds
[2] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[6] [AfterEach] KCP Upgrade Spec - Single Control Plane Cluster
[6]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/kcp_upgrade.go:112
[6] 
[6] • Failure [1503.977 seconds]
[6] [unmanaged] [Cluster API Framework]
... skipping 52 lines ...
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] 
[6] Summarizing 1 Failure:
[6] 
[6] [Fail] [unmanaged] [Cluster API Framework] KCP Upgrade Spec - Single Control Plane Cluster [It] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
[6] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/framework/cluster_helpers.go:134
[6] 
[6] Ran 1 of 6 Specs in 1798.257 seconds
[6] FAIL! -- 0 Passed | 1 Failed | 0 Pending | 5 Skipped
[6] --- FAIL: TestE2E (1798.27s)
[6] FAIL
[5] INFO: Waiting for kube-proxy to have the upgraded kubernetes version
[5] STEP: Ensuring kube-proxy has the correct image
[5] INFO: Waiting for CoreDNS to have the upgraded image tag
[5] STEP: Ensuring CoreDNS has the correct image
[5] INFO: Waiting for etcd to have the upgraded image tag
[5] STEP: PASSED!
... skipping 29 lines ...
[5]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/kcp_upgrade.go:75
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 1 of 4 Specs in 2492.999 seconds
[5] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 3 Skipped
[5] PASS
[4] STEP: Deleting namespace used for hosting the "machine-pool" test spec
[4] INFO: Deleting namespace machine-pool-emozy8
[4] [AfterEach] Machine Pool Spec
[4]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:125
[4] STEP: Node 4 released resources: {ec2:4, vpc:1, eip:3, ngw:1, igw:1, classiclb:1}
... skipping 7 lines ...
[4]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.7/e2e/machine_pool.go:76
[4] ------------------------------
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 1 of 1 Specs in 2573.439 seconds
[4] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 0 of 0 Specs in 2719.145 seconds
[1] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[1] PASS

Ginkgo ran 1 suite in 46m57.126090901s
Test Suite Failed

real	46m57.135s
user	10m53.924s
sys	3m2.258s
make: *** [Makefile:170: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...