PR kishorj: set CCM 1.24 k8s deps to 1.24.9
Result FAILURE
Tests 0 failed / 0 succeeded
Started 2022-12-28 23:18
Elapsed 6m31s
Revision f4a6916a75a90ebe686fc574b6bdb5c2ea80f73a
Refs 551

No Test Failures!


Error lines from build-log.txt

... skipping 439 lines ...
I1228 23:23:53.926892   27244 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I1228 23:23:53.986503   27244 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.27.0-alpha.2+v1.27.0-alpha.1-27-gb47289178d/linux/amd64/kops
I1228 23:23:54.973383   27244 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/test-cluster-20221228232339.k8s/id_ed25519
I1228 23:23:54.985441   27244 up.go:44] Cleaning up any leaked resources from previous cluster
I1228 23:23:54.985638   27244 dumplogs.go:45] /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test/20221228232339/20221228232339/kops toolbox dump --name test-cluster-20221228232339.k8s --dir /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test --private-key /tmp/kops/test-cluster-20221228232339.k8s/id_ed25519 --ssh-user 
I1228 23:23:54.985660   27244 local.go:42] ⚙️ /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test/20221228232339/20221228232339/kops toolbox dump --name test-cluster-20221228232339.k8s --dir /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test --private-key /tmp/kops/test-cluster-20221228232339.k8s/id_ed25519 --ssh-user 
W1228 23:23:55.653805   27244 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1228 23:23:55.653933   27244 down.go:48] /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test/20221228232339/20221228232339/kops delete cluster --name test-cluster-20221228232339.k8s --yes
I1228 23:23:55.653985   27244 local.go:42] ⚙️ /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test/20221228232339/20221228232339/kops delete cluster --name test-cluster-20221228232339.k8s --yes
I1228 23:23:55.741724   27266 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "test-cluster-20221228232339.k8s" not found
I1228 23:23:56.366006   27244 up.go:167] /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test/20221228232339/20221228232339/kops create cluster --name test-cluster-20221228232339.k8s --cloud aws --kubernetes-version v1.24 --ssh-public-key /tmp/kops/test-cluster-20221228232339.k8s/id_ed25519.pub --set cluster.spec.nodePortAccess=0.0.0.0/0 --dns=none --zones=us-west-2a,us-west-2b,us-west-2c --node-size=m5.large --master-size=m5.large --override=cluster.spec.kubeAPIServer.cloudProvider=external --override=cluster.spec.kubeControllerManager.cloudProvider=external --override=cluster.spec.kubelet.cloudProvider=external --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.cloudControllerManager.image=607362164682.dkr.ecr.us-west-2.amazonaws.com/amazon/cloud-controller-manager:v1.24.3-3-gec59301-20221228232339 --override=spec.cloudProvider.aws.ebsCSIDriver.enabled=true --admin-access 0.0.0.0/0 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48
I1228 23:23:56.366085   27244 local.go:42] ⚙️ /home/prow/go/src/k8s.io/cloud-provider-aws/_output/test/20221228232339/20221228232339/kops create cluster --name test-cluster-20221228232339.k8s --cloud aws --kubernetes-version v1.24 --ssh-public-key /tmp/kops/test-cluster-20221228232339.k8s/id_ed25519.pub --set cluster.spec.nodePortAccess=0.0.0.0/0 --dns=none --zones=us-west-2a,us-west-2b,us-west-2c --node-size=m5.large --master-size=m5.large --override=cluster.spec.kubeAPIServer.cloudProvider=external --override=cluster.spec.kubeControllerManager.cloudProvider=external --override=cluster.spec.kubelet.cloudProvider=external --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.cloudControllerManager.image=607362164682.dkr.ecr.us-west-2.amazonaws.com/amazon/cloud-controller-manager:v1.24.3-3-gec59301-20221228232339 --override=spec.cloudProvider.aws.ebsCSIDriver.enabled=true --admin-access 0.0.0.0/0 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48
I1228 23:23:56.444118   27276 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1228 23:23:56.478460   27276 create_cluster.go:878] Using SSH public key: /tmp/kops/test-cluster-20221228232339.k8s/id_ed25519.pub
I1228 23:23:57.127547   27276 new_cluster.go:1326] Cloud Provider ID: "aws"
I1228 23:23:57.569993   27276 subnets.go:185] Assigned CIDR 172.20.32.0/19 to subnet us-west-2a
... skipping 8 lines ...
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.24.9

*********************************************************************************

Error: cannot determine hash for "https://storage.googleapis.com/kubernetes-release/release/v1.24/bin/linux/amd64/kubelet" (have you specified a valid file location?)
Error: exit status 1
make: *** [Makefile:133: test-e2e] Error 1
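
A note on the failure above: the cluster was created with `--kubernetes-version v1.24` (a minor version, not a full patch release), and the terminal error shows a hash lookup against `.../release/v1.24/bin/linux/amd64/kubelet`, a path that does not exist because release binaries are published only under full patch versions such as `v1.24.9`. The sketch below is a hypothetical illustration of that URL construction, not kops's actual code; the `kubelet_url` helper and its validation are assumptions for demonstration.

```python
import re

# Base URL for official Kubernetes release binaries (as seen in the log above).
RELEASE_BASE = "https://storage.googleapis.com/kubernetes-release/release"

def kubelet_url(version: str) -> str:
    """Build the kubelet download URL from a Kubernetes version string.

    Hypothetical helper: a full patch version like 'v1.24.9' maps to a real
    object, while a truncated 'v1.24' maps to a nonexistent path, so any
    subsequent hash fetch for the binary fails.
    """
    if not re.fullmatch(r"v\d+\.\d+\.\d+", version):
        raise ValueError(f"expected a full patch version, got {version!r}")
    return f"{RELEASE_BASE}/{version}/bin/linux/amd64/kubelet"
```

Under this reading, passing a resolved patch version (e.g. `v1.24.9`, matching the PR title) instead of `v1.24` would let the download URL resolve.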
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...