Result: success
Tests: 0 failed / 1 succeeded
Started: 2022-09-03 19:01
Elapsed: 2h10m
Revision:
Uploader: crier

No Test Failures!



Error lines from build-log.txt

... skipping 217 lines ...
I0903 19:03:12.213453    6288 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/ce3f728c-2bba-11ed-a329-1a5ef8605916"
I0903 19:03:12.221447    6288 app.go:128] ID for this run: "ce3f728c-2bba-11ed-a329-1a5ef8605916"
I0903 19:03:12.222750    6288 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519
I0903 19:03:12.230125    6288 up.go:44] Cleaning up any leaked resources from previous cluster
I0903 19:03:12.230213    6288 dumplogs.go:45] /tmp/kops.83pYKIUT3 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0903 19:03:12.230230    6288 local.go:42] ⚙️ /tmp/kops.83pYKIUT3 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0903 19:03:12.766282    6288 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0903 19:03:12.766324    6288 down.go:48] /tmp/kops.83pYKIUT3 delete cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --yes
I0903 19:03:12.766333    6288 local.go:42] ⚙️ /tmp/kops.83pYKIUT3 delete cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --yes
I0903 19:03:12.797867    6312 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0903 19:03:12.797956    6312 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" not found
I0903 19:03:13.283642    6288 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/09/03 19:03:13 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0903 19:03:13.300277    6288 http.go:37] curl https://ip.jsb.workers.dev
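(The 404 above is expected here: the Prow pod resolves metadata.google.internal but has no external-IP access-config, so the harness falls back to an external echo service. A minimal sketch of the same two-step probe, assuming only curl is available; GCE's metadata server requires the Metadata-Flavor header:

  curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip \
    || curl https://ip.jsb.workers.dev   # fallback echo service, as in the log

The detected address is what gets passed as --admin-access in the create cluster command below.)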
I0903 19:03:13.424490    6288 up.go:159] /tmp/kops.83pYKIUT3 create cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.25.0 --ssh-public-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --discovery-store=s3://k8s-kops-prow/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/discovery --admin-access 35.192.181.89/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-2a --master-size c5.large
I0903 19:03:13.424521    6288 local.go:42] ⚙️ /tmp/kops.83pYKIUT3 create cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.25.0 --ssh-public-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking calico --discovery-store=s3://k8s-kops-prow/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/discovery --admin-access 35.192.181.89/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-2a --master-size c5.large
I0903 19:03:13.456621    6323 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0903 19:03:13.456712    6323 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0903 19:03:13.473438    6323 create_cluster.go:843] Using SSH public key: /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 565 lines ...
I0903 19:03:53.397001    6288 up.go:243] /tmp/kops.83pYKIUT3 validate cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0903 19:03:53.397060    6288 local.go:42] ⚙️ /tmp/kops.83pYKIUT3 validate cluster --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0903 19:03:53.427450    6361 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0903 19:03:53.427562    6361 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io

W0903 19:03:54.420363    6361 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0903 19:04:04.508233    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
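(While the API record still holds the kops placeholder 203.0.113.123, two checks usually localize the problem; a hedged sketch, assuming the cluster domain from this run and the SSH key generated above:

  dig +short api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io   # stays at 203.0.113.123 until dns-controller updates the record
  kubectl -n kube-system logs deploy/dns-controller          # only reachable once the API endpoint resolves

In this run the record flips to the real master IP a few minutes later, so the retries below eventually clear.)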
W0903 19:04:14.552190    6361 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping repeated validation output: the same INSTANCE GROUPS table and "dns apiserver Validation Failed" message recurred for every retry from 19:04:24 through 19:06:25 ...
W0903 19:06:35.329573    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 20 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-f7lxk	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-f7lxk" is pending
Pod	kube-system/ebs-csi-node-2ls22			system-node-critical pod "ebs-csi-node-2ls22" is pending
Pod	kube-system/ebs-csi-node-2mhnw			system-node-critical pod "ebs-csi-node-2mhnw" is pending
Pod	kube-system/ebs-csi-node-7wn9z			system-node-critical pod "ebs-csi-node-7wn9z" is pending
Pod	kube-system/ebs-csi-node-whfnx			system-node-critical pod "ebs-csi-node-whfnx" is pending

Validation Failed
W0903 19:06:46.676752    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 22 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-f7lxk	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-f7lxk" is pending
Pod	kube-system/ebs-csi-node-2ls22			system-node-critical pod "ebs-csi-node-2ls22" is pending
Pod	kube-system/ebs-csi-node-2mhnw			system-node-critical pod "ebs-csi-node-2mhnw" is pending
Pod	kube-system/ebs-csi-node-7wn9z			system-node-critical pod "ebs-csi-node-7wn9z" is pending
Pod	kube-system/ebs-csi-node-whfnx			system-node-critical pod "ebs-csi-node-whfnx" is pending

Validation Failed
W0903 19:06:57.596587    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 22 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-f7lxk	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-f7lxk" is pending
Pod	kube-system/ebs-csi-node-2ls22			system-node-critical pod "ebs-csi-node-2ls22" is pending
Pod	kube-system/ebs-csi-node-2mhnw			system-node-critical pod "ebs-csi-node-2mhnw" is pending
Pod	kube-system/ebs-csi-node-7wn9z			system-node-critical pod "ebs-csi-node-7wn9z" is pending
Pod	kube-system/ebs-csi-node-whfnx			system-node-critical pod "ebs-csi-node-whfnx" is pending

Validation Failed
W0903 19:07:08.620702    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 23 lines ...
Pod	kube-system/ebs-csi-node-2ls22				system-node-critical pod "ebs-csi-node-2ls22" is pending
Pod	kube-system/ebs-csi-node-2mhnw				system-node-critical pod "ebs-csi-node-2mhnw" is pending
Pod	kube-system/ebs-csi-node-7wn9z				system-node-critical pod "ebs-csi-node-7wn9z" is pending
Pod	kube-system/ebs-csi-node-whfnx				system-node-critical pod "ebs-csi-node-whfnx" is pending
Pod	kube-system/kube-controller-manager-i-0770751b7129591fc	system-cluster-critical pod "kube-controller-manager-i-0770751b7129591fc" is pending

Validation Failed
W0903 19:07:19.558702    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 21 lines ...
Pod	kube-system/ebs-csi-controller-6cc7cc95cb-f7lxk	system-cluster-critical pod "ebs-csi-controller-6cc7cc95cb-f7lxk" is pending
Pod	kube-system/ebs-csi-node-2ls22			system-node-critical pod "ebs-csi-node-2ls22" is pending
Pod	kube-system/ebs-csi-node-2mhnw			system-node-critical pod "ebs-csi-node-2mhnw" is pending
Pod	kube-system/ebs-csi-node-7wn9z			system-node-critical pod "ebs-csi-node-7wn9z" is pending
Pod	kube-system/ebs-csi-node-whfnx			system-node-critical pod "ebs-csi-node-whfnx" is pending

Validation Failed
W0903 19:07:30.526953    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 17 lines ...
Pod	kube-system/ebs-csi-node-2ls22			system-node-critical pod "ebs-csi-node-2ls22" is pending
Pod	kube-system/ebs-csi-node-2mhnw			system-node-critical pod "ebs-csi-node-2mhnw" is pending
Pod	kube-system/ebs-csi-node-7wn9z			system-node-critical pod "ebs-csi-node-7wn9z" is pending
Pod	kube-system/ebs-csi-node-whfnx			system-node-critical pod "ebs-csi-node-whfnx" is pending
Pod	kube-system/kube-proxy-i-0a3e37da2097b7669	system-node-critical pod "kube-proxy-i-0a3e37da2097b7669" is pending

Validation Failed
W0903 19:07:41.486089    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 6 lines ...
i-0d7e4d91dea7a2214	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/ebs-csi-node-2mhnw	system-node-critical pod "ebs-csi-node-2mhnw" is pending

Validation Failed
W0903 19:07:52.504109    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 6 lines ...
i-0d7e4d91dea7a2214	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-i-0d455ca44c71a751f	system-node-critical pod "kube-proxy-i-0d455ca44c71a751f" is pending

Validation Failed
W0903 19:08:03.520281    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 6 lines ...
i-0d7e4d91dea7a2214	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-i-0d7e4d91dea7a2214	system-node-critical pod "kube-proxy-i-0d7e4d91dea7a2214" is pending

Validation Failed
W0903 19:08:14.571499    6361 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-2a	Master	c5.large	1	1	us-east-2a
nodes-us-east-2a	Node	t3.medium	4	4	us-east-2a

... skipping 195 lines ...
evicting pod kube-system/dns-controller-644b99f887-nwkj6
evicting pod kube-system/calico-kube-controllers-7c6d874c78-lrjpq
I0903 19:10:22.936357    6404 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0903 19:10:27.937224    6404 instancegroups.go:591] Stopping instance "i-0770751b7129591fc", node "i-0770751b7129591fc", in group "master-us-east-2a.masters.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" (this may take a while).
I0903 19:10:28.144987    6404 instancegroups.go:436] waiting for 15s after terminating instance
I0903 19:10:43.150464    6404 instancegroups.go:470] Validating the cluster.
I0903 19:10:43.276949    6404 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.146.84:443: connect: connection refused.
I0903 19:11:43.338358    6404 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.146.84:443: i/o timeout.
I0903 19:12:43.405504    6404 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.146.84:443: i/o timeout.
I0903 19:13:43.450498    6404 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.146.84:443: i/o timeout.
I0903 19:14:43.502070    6404 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.146.84:443: i/o timeout.
I0903 19:15:15.076654    6404 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-0879a433cb575467e" of role "node" is not ready, node "i-0a3e37da2097b7669" of role "node" is not ready, node "i-0d455ca44c71a751f" of role "node" is not ready, node "i-0d7e4d91dea7a2214" of role "node" is not ready, system-node-critical pod "calico-node-r8z6s" is not ready (calico-node).
I0903 19:15:46.044036    6404 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bhbxn" is not ready (calico-kube-controllers), system-node-critical pod "calico-node-r8z6s" is not ready (calico-node).
I0903 19:16:16.960373    6404 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bhbxn" is not ready (calico-kube-controllers), system-node-critical pod "calico-node-r8z6s" is not ready (calico-node).
I0903 19:16:47.954686    6404 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bhbxn" is not ready (calico-kube-controllers).
I0903 19:17:18.991467    6404 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bhbxn" is not ready (calico-kube-controllers).
I0903 19:17:49.925195    6404 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 231 lines ...
WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-cloud-controller-manager-llgfb, kube-system/calico-node-r8z6s, kube-system/ebs-csi-node-l7js4, kube-system/kops-controller-2c6tt
evicting pod kube-system/dns-controller-644b99f887-wj2cp
I0903 19:43:50.754500    6476 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0903 19:43:55.755193    6476 instancegroups.go:591] Stopping instance "i-0c7f2a9c5fd1b3284", node "i-0c7f2a9c5fd1b3284", in group "master-us-east-2a.masters.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" (this may take a while).
I0903 19:43:55.965063    6476 instancegroups.go:436] waiting for 15s after terminating instance
I0903 19:44:10.968828    6476 instancegroups.go:470] Validating the cluster.
I0903 19:44:11.073659    6476 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.162.10:443: connect: connection refused.
I0903 19:45:11.109810    6476 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.162.10:443: i/o timeout.
I0903 19:46:11.154712    6476 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.162.10:443: i/o timeout.
I0903 19:47:11.204700    6476 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.162.10:443: i/o timeout.
I0903 19:48:11.247336    6476 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.162.10:443: i/o timeout.
I0903 19:49:11.311340    6476 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.117.162.10:443: i/o timeout.
I0903 19:49:42.699060    6476 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05e00a1c2c933ee26" of role "node" is not ready, node "i-0a0154745fef364ff" of role "node" is not ready, node "i-0dcaa89a37fc44d14" of role "node" is not ready, node "i-0e11b9e1d1412b438" of role "node" is not ready, system-node-critical pod "calico-node-b5c6h" is not ready (calico-node).
I0903 19:50:13.629133    6476 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05e00a1c2c933ee26" of role "node" is not ready, node "i-0a0154745fef364ff" of role "node" is not ready, node "i-0dcaa89a37fc44d14" of role "node" is not ready, node "i-0e11b9e1d1412b438" of role "node" is not ready, system-node-critical pod "calico-node-b5c6h" is not ready (calico-node).
I0903 19:50:44.632278    6476 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05e00a1c2c933ee26" of role "node" is not ready, node "i-0a0154745fef364ff" of role "node" is not ready, node "i-0dcaa89a37fc44d14" of role "node" is not ready, node "i-0e11b9e1d1412b438" of role "node" is not ready, system-node-critical pod "calico-node-b5c6h" is not ready (calico-node).
I0903 19:51:15.572128    6476 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05e00a1c2c933ee26" of role "node" is not ready, node "i-0a0154745fef364ff" of role "node" is not ready, node "i-0dcaa89a37fc44d14" of role "node" is not ready, node "i-0e11b9e1d1412b438" of role "node" is not ready, system-node-critical pod "calico-node-b5c6h" is not ready (calico-node).
I0903 19:51:46.564085    6476 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05e00a1c2c933ee26" of role "node" is not ready, node "i-0a0154745fef364ff" of role "node" is not ready, node "i-0dcaa89a37fc44d14" of role "node" is not ready, node "i-0e11b9e1d1412b438" of role "node" is not ready, system-node-critical pod "calico-node-b5c6h" is not ready (calico-node).
I0903 19:52:17.506606    6476 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": node "i-05e00a1c2c933ee26" of role "node" is not ready, node "i-0a0154745fef364ff" of role "node" is not ready, node "i-0dcaa89a37fc44d14" of role "node" is not ready, node "i-0e11b9e1d1412b438" of role "node" is not ready, system-node-critical pod "calico-node-b5c6h" is not ready (calico-node).
... skipping 238 lines ...
WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-cloud-controller-manager-m6hvj, kube-system/calico-node-b5c6h, kube-system/ebs-csi-node-dsvsj, kube-system/kops-controller-7d5k7
evicting pod kube-system/dns-controller-644b99f887-q852q
I0903 20:22:27.905773    6545 instancegroups.go:660] Waiting for 5s for pods to stabilize after draining.
I0903 20:22:32.906506    6545 instancegroups.go:591] Stopping instance "i-0c8e0b5a6a019b6fa", node "i-0c8e0b5a6a019b6fa", in group "master-us-east-2a.masters.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io" (this may take a while).
I0903 20:22:33.126868    6545 instancegroups.go:436] waiting for 15s after terminating instance
I0903 20:22:48.129509    6545 instancegroups.go:470] Validating the cluster.
I0903 20:22:48.213338    6545 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.137.177.237:443: connect: connection refused.
I0903 20:23:48.253833    6545 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.137.177.237:443: i/o timeout.
I0903 20:24:48.314725    6545 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.137.177.237:443: i/o timeout.
I0903 20:25:48.359249    6545 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.137.177.237:443: i/o timeout.
I0903 20:26:48.422713    6545 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.137.177.237:443: i/o timeout.
I0903 20:27:48.487579    6545 instancegroups.go:516] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.137.177.237:443: i/o timeout.
I0903 20:28:19.798066    6545 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bjvkw" is not ready (calico-kube-controllers).
I0903 20:28:50.758469    6545 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bjvkw" is not ready (calico-kube-controllers).
I0903 20:29:21.728836    6545 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bjvkw" is not ready (calico-kube-controllers).
I0903 20:29:52.770308    6545 instancegroups.go:526] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "calico-kube-controllers-7c6d874c78-bjvkw" is not ready (calico-kube-controllers).
I0903 20:30:23.834104    6545 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
I0903 20:30:34.768514    6545 instancegroups.go:506] Cluster validated; revalidating in 10s to make sure it does not flap.
... skipping 252 lines ...
Warning: Permanently added '18.217.198.209' (ECDSA) to the list of known hosts.
I0903 20:59:18.478782    6606 dumplogs.go:248] ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@18.217.198.209 -- rm -rf /tmp/cluster-info
I0903 20:59:18.478837    6606 local.go:42] ⚙️ ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@18.217.198.209 -- rm -rf /tmp/cluster-info
Warning: Permanently added '18.217.198.209' (ECDSA) to the list of known hosts.
I0903 20:59:19.202169    6606 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I0903 20:59:19.202211    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W0903 20:59:19.512961    6606 dumplogs.go:132] Failed to get csinodes: exit status 1
I0903 20:59:19.513098    6606 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I0903 20:59:19.513113    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W0903 20:59:19.827777    6606 dumplogs.go:132] Failed to get csidrivers: exit status 1
I0903 20:59:19.827886    6606 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I0903 20:59:19.827896    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W0903 20:59:20.126508    6606 dumplogs.go:132] Failed to get storageclasses: exit status 1
I0903 20:59:20.126630    6606 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I0903 20:59:20.126641    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W0903 20:59:20.415711    6606 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I0903 20:59:20.415832    6606 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I0903 20:59:20.415842    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W0903 20:59:20.714463    6606 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I0903 20:59:20.714815    6606 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I0903 20:59:20.714834    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W0903 20:59:21.001365    6606 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I0903 20:59:21.001438    6606 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
I0903 20:59:21.313876    6606 dumplogs.go:162] kubectl get configmaps -n default -o yaml
I0903 20:59:21.313910    6606 local.go:42] ⚙️ kubectl get configmaps -n default -o yaml
I0903 20:59:21.498819    6606 dumplogs.go:188] /tmp/kops.83pYKIUT3 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I0903 20:59:21.498903    6606 local.go:42] ⚙️ /tmp/kops.83pYKIUT3 toolbox dump --name e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I0903 20:59:39.624547    6606 dumplogs.go:217] ssh -i /tmp/kops/e2e-dc9e19fb9a-ad003.test-cncf-aws.k8s.io/id_ed25519 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@18.217.198.209 -- kubectl cluster-info dump --all-namespaces -o yaml --output-directory /tmp/cluster-info
... skipping 533 lines ...