PR dims: Bump dependencies and go version (in go.mod)
Result ABORTED
Tests 0 failed / 0 succeeded
Started 2023-01-21 18:09
Elapsed 17m11s
Revision 3c850a32fb44f74185a1f46b03d14bb72768a014
Refs 547

No Test Failures!


Error lines from build-log.txt

... skipping 407 lines ...
	go build \
	-o=/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/_output/bin/aws-iam-authenticator \
	-ldflags="-w -s -X sigs.k8s.io/aws-iam-authenticator/pkg.Version= -X sigs.k8s.io/aws-iam-authenticator/pkg.BuildDate=2023-01-21T18:14:12Z -X sigs.k8s.io/aws-iam-authenticator/pkg.CommitID=c7e4328f60640f43b2f0b21e150c6e0a86d2ef44" \
	./cmd/aws-iam-authenticator/
make[2]: Leaving directory '/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator'
make[1]: Leaving directory '/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator'
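
Note: the -X linker flags in the build command above stamp version metadata into package-level string variables at link time; the variables referenced here live in sigs.k8s.io/aws-iam-authenticator/pkg. A minimal sketch of how such variables are declared (variable names taken from the flags, file name assumed):

// pkg/version.go (sketch): plain string vars, left empty in source and
// overridden at build time via -ldflags "-X <import path>.<var>=<value>".
package pkg

var (
	Version   string // -X sigs.k8s.io/aws-iam-authenticator/pkg.Version=<tag>
	BuildDate string // -X sigs.k8s.io/aws-iam-authenticator/pkg.BuildDate=<RFC3339 time>
	CommitID  string // -X sigs.k8s.io/aws-iam-authenticator/pkg.CommitID=<git sha>
)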
Error: cluster not found "test-cluster-18046.k8s.local"
###
## Setting up roles
#

An error occurred (NoSuchEntity) when calling the GetRole operation: The role with name aws-iam-authenticator-test-role-KubernetesAdmin cannot be found.
###
## Creating aws-iam-authenticator-test-role-KubernetesAdmin role
#
admin role: arn:aws:iam::607362164682:role/aws-iam-authenticator-test-role-KubernetesAdmin

An error occurred (NoSuchEntity) when calling the GetRole operation: The role with name aws-iam-authenticator-test-role-KubernetesUsers cannot be found.
###
## Creating aws-iam-authenticator-test-role-KubernetesUsers role
#
user role: arn:aws:iam::607362164682:role/aws-iam-authenticator-test-role-KubernetesUsers
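
Note: the two NoSuchEntity errors above are expected. The setup looks each test role up first and only creates it when the lookup fails, then prints the resulting ARN. A hedged sketch of that get-then-create pattern using aws-sdk-go (the script itself drives the AWS CLI; the function and parameter names below are illustrative):

package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/iam"
)

// ensureRole returns the role's ARN, creating the role only when GetRole
// reports NoSuchEntity (the same error code surfaced in the log above).
func ensureRole(svc *iam.IAM, name, trustPolicy string) (string, error) {
	out, err := svc.GetRole(&iam.GetRoleInput{RoleName: aws.String(name)})
	if err == nil {
		return aws.StringValue(out.Role.Arn), nil
	}
	if aerr, ok := err.(awserr.Error); !ok || aerr.Code() != iam.ErrCodeNoSuchEntityException {
		return "", err
	}
	created, err := svc.CreateRole(&iam.CreateRoleInput{
		RoleName:                 aws.String(name),
		AssumeRolePolicyDocument: aws.String(trustPolicy),
	})
	if err != nil {
		return "", err
	}
	return aws.StringValue(created.Role.Arn), nil
}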
###
## Generating SSH key /home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/hack/e2e/e2e-test-artifacts/id_rsa
... skipping 12 lines ...
|      . S ..+o.= |
|          .. o=  |
|           ..o+ .|
|           oo+.+ |
|          oo=+. o|
+----[SHA256]-----+
Error: cluster not found "test-cluster-18046.k8s.local"
###
## Creating cluster test-cluster-18046.k8s.local with /home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/hack/e2e/e2e-test-artifacts/test-cluster-18046.k8s.local.json (dry run)
#
I0121 18:14:22.321379   17830 new_cluster.go:248] Inferred "aws" cloud provider from zone "us-west-2a"
I0121 18:14:22.322571   17830 new_cluster.go:1102]  Cloud Provider ID = aws
I0121 18:14:22.651678   17830 subnets.go:182] Assigned CIDR 172.20.32.0/19 to subnet us-west-2a
... skipping 80 lines ...
Unable to connect to the server: dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Unable to connect to the server: dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
error: Get "https://api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com/api?timeout=32s": dial tcp: lookup api-test-cluster-18046-k8-i63vvl-371650772.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host - error from a previous attempt: EOF
Unable to connect to the server: EOF
error: the server doesn't have a resource type "nodes"
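
Note: the repeated "no such host" and EOF lines above are the harness polling the freshly created API load balancer until its DNS name resolves and the apiserver answers; kubectl is simply retried until a node list succeeds. A minimal sketch of such a wait loop with client-go, assuming the kubeconfig written by kops (not the project's actual script):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForAPI retries a node list until the apiserver behind the new ELB is reachable.
func waitForAPI(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		return err == nil, nil // "no such host" and EOF errors just mean "retry"
	})
}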
###
## Cluster is up!
#
###
## Applying testing roles
#
... skipping 42 lines ...
ip-172-20-85-23.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/aws-iam-authenticator-8wggk	system-node-critical pod "aws-iam-authenticator-8wggk" is pending

Validation Failed
W0121 18:20:24.173852   18345 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 8 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-101-21.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-101-21.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-58-48.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-58-48.us-west-2.compute.internal" is pending

Validation Failed
W0121 18:20:36.272738   18345 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 47 lines ...
• [SLOW TEST] [78.530 seconds]
[apiserver] [Disruptive] the apiserver when the manifest changes restarts successfully
/home/prow/go/src/github.com/kubernetes-sigs/aws-iam-authenticator/tests/e2e/apiserver_test.go:88
------------------------------

Ran 8 of 8 Specs in 81.086 seconds
SUCCESS! -- 8 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestE2E (81.09s)
PASS
You're using deprecated Ginkgo functionality:
=============================================
  Support for custom reporters has been removed in V2.  Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
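
Note: this warning is printed because the suite still goes through Ginkgo's deprecated custom-reporter entry point. In Ginkgo v2 the supported path is ReportAfterSuite (or the ginkgo --json-report flag). A hedged sketch of the v2 shape of such a suite; the suite name and handler body are illustrative, not the project's actual code:

package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	// Export results via ReportAfterSuite instead of a custom reporter.
	ReportAfterSuite("export results", func(r Report) {
		// write r wherever the old custom reporter sent its output
	})
	RunSpecs(t, "aws-iam-authenticator e2e suite")
}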
... skipping 407 lines ...