PR (dims): Bump dependencies and go version (in go.mod)
Result: ABORTED
Tests: 0 failed / 0 succeeded
Started: 2023-01-21 03:41
Elapsed: 16m54s
Revision: c09e6ea4125da5677a88431c1c35971a15948313
Refs: 547

No Test Failures!


Error lines from build-log.txt

... skipping 399 lines ...
	go build \
	-o=/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/_output/bin/aws-iam-authenticator \
	-ldflags="-w -s -X sigs.k8s.io/aws-iam-authenticator/pkg.Version= -X sigs.k8s.io/aws-iam-authenticator/pkg.BuildDate=2023-01-21T03:46:39Z -X sigs.k8s.io/aws-iam-authenticator/pkg.CommitID=1d64e41079f7088102b1977c7a62e51c862b04d1" \
	./cmd/aws-iam-authenticator/
make[2]: Leaving directory '/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator'
make[1]: Leaving directory '/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator'
Error: cluster not found "test-cluster-11454.k8s.local"
###
## Setting up roles
#

An error occurred (NoSuchEntity) when calling the GetRole operation: The role with name aws-iam-authenticator-test-role-KubernetesAdmin cannot be found.
###
## Creating aws-iam-authenticator-test-role-KubernetesAdmin role
#
admin role: arn:aws:iam::607362164682:role/aws-iam-authenticator-test-role-KubernetesAdmin

An error occurred (NoSuchEntity) when calling the GetRole operation: The role with name aws-iam-authenticator-test-role-KubernetesUsers cannot be found.
###
## Creating aws-iam-authenticator-test-role-KubernetesUsers role
#
user role: arn:aws:iam::607362164682:role/aws-iam-authenticator-test-role-KubernetesUsers
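The NoSuchEntity errors above are expected: the setup script looks each role up first and creates it only on a miss, so reruns are idempotent. A minimal sketch of that get-or-create pattern, using a hypothetical in-memory stand-in for the IAM API (the class, helper names, and account number are illustrative, not from the test harness):

```python
class NoSuchEntity(Exception):
    """Stands in for the AWS NoSuchEntity error seen in the log."""


def ensure_role(client, name):
    """Return the role ARN, creating the role only if it does not exist."""
    try:
        return client.get_role(name)
    except NoSuchEntity:
        return client.create_role(name)


class FakeIAM:
    """Illustrative in-memory stand-in for the IAM role API."""

    def __init__(self):
        self.roles = {}

    def get_role(self, name):
        if name not in self.roles:
            raise NoSuchEntity(name)
        return self.roles[name]

    def create_role(self, name):
        # Account ID below is a placeholder, not the one from this job.
        arn = f"arn:aws:iam::123456789012:role/{name}"
        self.roles[name] = arn
        return arn


iam = FakeIAM()
admin_arn = ensure_role(iam, "aws-iam-authenticator-test-role-KubernetesAdmin")
```

Because the first lookup raises and the second returns the stored ARN, calling `ensure_role` repeatedly is safe, which is what lets the e2e script run against an account where the roles may or may not already exist.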
###
## Generating SSH key /home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/hack/e2e/e2e-test-artifacts/id_rsa
... skipping 12 lines ...
(ssh-keygen randomart image, truncated by the log viewer's line elision)
Error: cluster not found "test-cluster-11454.k8s.local"
###
## Creating cluster test-cluster-11454.k8s.local with /home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/hack/e2e/e2e-test-artifacts/test-cluster-11454.k8s.local.json (dry run)
#
I0121 03:46:48.796004   17746 new_cluster.go:248] Inferred "aws" cloud provider from zone "us-west-2a"
I0121 03:46:48.796171   17746 new_cluster.go:1102]  Cloud Provider ID = aws
I0121 03:46:49.126288   17746 subnets.go:182] Assigned CIDR 172.20.32.0/19 to subnet us-west-2a
... skipping 79 lines ...
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
error: Get "https://api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com/api?timeout=32s": dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host - error from a previous attempt: EOF
error: Get "https://api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com/api?timeout=32s": dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host - error from a previous attempt: EOF
Unable to connect to the server: dial tcp: lookup api-test-cluster-11454-k8-fav6f7-2139274178.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: EOF
Unable to connect to the server: EOF
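The repeated "Unable to connect" lines above are a poll loop: the harness keeps retrying `kubectl` until the new ELB's DNS name propagates and the API server answers. A minimal sketch of that retry-until-healthy pattern (function name, timeout, and interval are assumptions, not taken from the harness; the clock and sleep are injectable so the loop is testable without real waiting):

```python
import time


def wait_for(check, timeout=300.0, interval=5.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Returns True on success, False if the deadline passes first.
    """
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

In the real script the `check` would be something like a `kubectl get --raw /healthz` invocation; each False result corresponds to one "Unable to connect" line until DNS resolves and the loop exits.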
###
## Cluster is up!
#
... skipping 45 lines ...
ip-172-20-73-64.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/aws-iam-authenticator-s474t	system-node-critical pod "aws-iam-authenticator-s474t" is pending

Validation Failed
W0121 03:52:39.077334   18252 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 7 lines ...
ip-172-20-73-64.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-115-124.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-115-124.us-west-2.compute.internal" is pending

Validation Failed
W0121 03:52:51.105315   18252 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 42 lines ...