Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2023-03-07 09:42
Elapsed: 2m12s
Revision: master

Test Failures


kubetest2 Up 6.06s

exit status 1
from junit_runner.xml




Error lines from build-log.txt

... skipping 135 lines ...
I0307 09:43:49.216234    6150 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0307 09:43:49.252597    6150 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.27.0-alpha.2+v1.27.0-alpha.1-425-g5e78321a91/linux/amd64/kops
I0307 09:43:50.216890    6150 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519
I0307 09:43:50.227461    6150 up.go:44] Cleaning up any leaked resources from previous cluster
I0307 09:43:50.227602    6150 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops toolbox dump --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
I0307 09:43:50.227624    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops toolbox dump --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
Error: Cluster.kops.k8s.io "e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io" not found
W0307 09:43:50.777462    6150 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0307 09:43:50.777515    6150 down.go:48] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops delete cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --yes
I0307 09:43:50.777527    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops delete cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --yes
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io" not found
I0307 09:43:51.288712    6150 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2023/03/07 09:43:51 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0307 09:43:51.301805    6150 http.go:37] curl https://ip.jsb.workers.dev
I0307 09:43:51.387797    6150 up.go:167] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops create cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.17 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519.pub --set cluster.spec.nodePortAccess=0.0.0.0/0 --image=309956199498/RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2 --channel=alpha --networking=calico --container-runtime=docker --admin-access 104.197.165.109/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I0307 09:43:51.387839    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops create cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.17 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519.pub --set cluster.spec.nodePortAccess=0.0.0.0/0 --image=309956199498/RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2 --channel=alpha --networking=calico --container-runtime=docker --admin-access 104.197.165.109/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I0307 09:43:51.443878    6192 create_cluster.go:882] Using SSH public key: /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519.pub
I0307 09:43:51.928607    6192 new_cluster.go:1347] Cloud Provider ID: "aws"
I0307 09:43:53.149265    6192 subnets.go:185] Assigned CIDR 172.20.32.0/19 to subnet ap-northeast-1a
... skipping 16 lines ...
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.23.16

*********************************************************************************

Error: control-plane-ap-northeast-1a.spec.image: Invalid value: "309956199498/RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2": specified image "309956199498/RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2" is invalid: could not find Image for "309956199498/RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2"
I0307 09:43:55.199031    6150 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops toolbox dump --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
I0307 09:43:55.199099    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops toolbox dump --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
W0307 09:44:16.090180    6199 toolbox_dump.go:161] cannot load kubeconfig settings for "e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io": context "e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io" does not exist
I0307 09:44:16.097772    6150 dumplogs.go:79] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops get cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io -o yaml
I0307 09:44:16.097813    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops get cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io -o yaml
I0307 09:44:16.621977    6150 dumplogs.go:79] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops get instancegroups --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io -o yaml
... skipping 2 lines ...
I0307 09:44:17.388068    6150 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I0307 09:44:17.465037    6150 dumplogs.go:198] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops toolbox dump --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user -o yaml
I0307 09:44:17.465090    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops toolbox dump --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user -o yaml
W0307 09:44:36.735799    6150 dumplogs.go:281] ControlPlane instance not found from kops toolbox dump
I0307 09:44:36.736092    6150 dumplogs.go:128] kubectl --request-timeout 5s get csinodes --all-namespaces --show-managed-fields -o yaml
I0307 09:44:36.736123    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces --show-managed-fields -o yaml
W0307 09:44:36.822401    6150 dumplogs.go:134] Failed to get csinodes: exit status 1
I0307 09:44:36.822582    6150 dumplogs.go:128] kubectl --request-timeout 5s get csidrivers --all-namespaces --show-managed-fields -o yaml
I0307 09:44:36.822595    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces --show-managed-fields -o yaml
W0307 09:44:36.903668    6150 dumplogs.go:134] Failed to get csidrivers: exit status 1
I0307 09:44:36.903848    6150 dumplogs.go:128] kubectl --request-timeout 5s get storageclasses --all-namespaces --show-managed-fields -o yaml
I0307 09:44:36.903862    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces --show-managed-fields -o yaml
W0307 09:44:36.988577    6150 dumplogs.go:134] Failed to get storageclasses: exit status 1
I0307 09:44:36.988736    6150 dumplogs.go:128] kubectl --request-timeout 5s get persistentvolumes --all-namespaces --show-managed-fields -o yaml
I0307 09:44:36.988749    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces --show-managed-fields -o yaml
W0307 09:44:37.077189    6150 dumplogs.go:134] Failed to get persistentvolumes: exit status 1
I0307 09:44:37.077357    6150 dumplogs.go:128] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
I0307 09:44:37.077370    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
W0307 09:44:37.165827    6150 dumplogs.go:134] Failed to get mutatingwebhookconfigurations: exit status 1
I0307 09:44:37.166052    6150 dumplogs.go:128] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
I0307 09:44:37.166072    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
W0307 09:44:37.250157    6150 dumplogs.go:134] Failed to get validatingwebhookconfigurations: exit status 1
I0307 09:44:37.250218    6150 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W0307 09:44:37.338058    6150 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I0307 09:44:37.338101    6150 down.go:48] /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops delete cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --yes
I0307 09:44:37.338115    6150 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/4cd7cffd-bccc-11ed-9b3c-ee7550d70ed3/kops delete cluster --name e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io --yes
I0307 09:44:38.546227    6314 delete_cluster.go:128] Looking for cloud resources to delete
No cloud resources to delete

Deleted cluster: "e2e-e2e-kops-grid-calico-rhel8-k23-docker.test-cncf-aws.k8s.io"
Error: exit status 1
+ EXIT_VALUE=1
+ set +o xtrace
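The run never created a cluster: `kops create cluster` failed validation with `could not find Image for "309956199498/RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2"`, so every later dump/delete step found nothing to act on. A minimal way to confirm the root cause is to ask EC2 whether that AMI is still published; this is a sketch assuming AWS CLI v2 and credentials with `ec2:DescribeImages` (the owner ID, image name, and region are taken from the log above):

```shell
# Query the AMI the job requested. An empty result means owner 309956199498
# no longer publishes an image with this exact name in ap-northeast-1,
# which matches the "could not find Image" validation error in the log.
aws ec2 describe-images \
  --region ap-northeast-1 \
  --owners 309956199498 \
  --filters "Name=name,Values=RHEL-8.1.0_HVM-20230216-x86_64-0-Hourly2-GP2" \
  --query "Images[].ImageId" \
  --output text
```

If the query comes back empty, the fix is on the job-config side: point the `--image` flag at an AMI that still exists (e.g. a newer image from the same owner), rather than anything in the test harness itself.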