Result: FAILURE
Tests: 1 failed / 10 succeeded
Started: 2020-08-08 17:20
Elapsed: 9m15s
job-version: v1.16.13
revision: v1.16.13

Test Failures


IsUp 1.58s

error during kubectl get nodes --no-headers: exit status 1
				from junit_runner.xml
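The failing step amounts to a liveness probe: run kubectl get nodes and treat a non-zero exit status as the cluster being unreachable. A minimal Go sketch of that check (assuming kubectl on PATH and a kubeconfig pointing at the cluster; illustrative, not the actual kubetest code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // isUp mirrors the failing step: shell out to kubectl and report
    // failure when the API server cannot be reached (exit status 1).
    func isUp() error {
        out, err := exec.Command("kubectl", "get", "nodes", "--no-headers").CombinedOutput()
        if err != nil {
            return fmt.Errorf("error during kubectl get nodes --no-headers: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := isUp(); err != nil {
            fmt.Println(err)
        }
    }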


Error lines from build-log.txt

Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --cluster=e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io --deployment=kops --kops-ssh-user=core --env=KUBE_SSH_USER=core --env=KOPS_DEPLOY_LATEST_URL=https://storage.googleapis.com/kubernetes-release/release/stable-1.16.txt --env=KOPS_KUBE_RELEASE_URL=https://storage.googleapis.com/kubernetes-release/release --env=KOPS_RUN_TOO_NEW_VERSION=1 --extract=release/stable-1.16 --ginkgo-parallel --kops-args=--networking=cilium --kops-image=075585003325/Flatcar-stable-2512.2.0-hvm --kops-priority-path=/workspace/kubernetes/platforms/linux/amd64 --kops-version=https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt --provider=aws '--test_args=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort|Services.*rejected.*endpoints|Services.*affinity' --timeout=60m
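The --test_args flag above hands ginkgo a skip regex that excludes slow, serial, disruptive, flaky, and feature-gated specs, plus a few known-problematic Services tests. A quick way to see what the pattern filters, using Go's regexp against hypothetical spec names:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // The skip pattern passed via --test_args (shell escaping removed).
        skip := regexp.MustCompile(`\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort|Services.*rejected.*endpoints|Services.*affinity`)

        // Hypothetical spec names, purely to illustrate the filtering.
        for _, name := range []string{
            "[sig-network] Services should create a functioning NodePort service",
            "[sig-storage] Volumes should allow substituting values in a volume subpath",
        } {
            fmt.Printf("skipped=%v  %s\n", skip.MatchString(name), name)
        }
    }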
starts with local mode
Environment:
ARTIFACTS=/logs/artifacts
AWS_DEFAULT_PROFILE=default
AWS_PROFILE=default
... skipping 146 lines ...
2020/08/08 17:20:41 process.go:155: Step './get-kube.sh' finished in 19.673072475s
2020/08/08 17:20:41 process.go:153: Running: /tmp/kops285327455/kops get clusters e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io

cluster not found "e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io"
2020/08/08 17:20:42 process.go:155: Step '/tmp/kops285327455/kops get clusters e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io' finished in 499.877312ms
2020/08/08 17:20:42 util.go:42: curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2020/08/08 17:20:42 kops.go:505: failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
2020/08/08 17:20:42 util.go:68: curl https://ip.jsb.workers.dev
2020/08/08 17:20:42 kops.go:430: Using external IP for admin access: 34.69.67.227/32
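The two curl lines above show the fallback used to discover the build machine's external IP: the GCE metadata service is tried first and, when it returns 404, a public echo endpoint is queried instead; the result is then passed to kops as --admin-access <ip>/32. A rough sketch of that lookup (based on the behavior visible in the log, not the exact kops.go/util.go code):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // externalIP tries the GCE metadata service first, then falls back
    // to a public echo endpoint when the metadata path is unavailable.
    func externalIP() (string, error) {
        req, _ := http.NewRequest("GET",
            "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip", nil)
        req.Header.Set("Metadata-Flavor", "Google")
        if resp, err := http.DefaultClient.Do(req); err == nil {
            defer resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                b, _ := io.ReadAll(resp.Body)
                return strings.TrimSpace(string(b)), nil
            }
        }
        resp, err := http.Get("https://ip.jsb.workers.dev")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        return strings.TrimSpace(string(b)), err
    }

    func main() {
        ip, err := externalIP()
        fmt.Println(ip, err) // whitelisted as ip + "/32"
    }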
2020/08/08 17:20:42 process.go:153: Running: /tmp/kops285327455/kops create cluster --name e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-2a --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.16.13 --admin-access 34.69.67.227/32 --image 075585003325/Flatcar-stable-2512.2.0-hvm --cloud aws --networking=cilium --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes
I0808 17:20:42.355085     160 featureflag.go:158] FeatureFlag "SpecOverrideFlag"=true
I0808 17:20:42.402243     160 create_cluster.go:687] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
I0808 17:20:43.339060     160 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet eu-west-2a
... skipping 47 lines ...

2020/08/08 17:21:06 process.go:155: Step '/tmp/kops285327455/kops create cluster --name e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-2a --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.16.13 --admin-access 34.69.67.227/32 --image 075585003325/Flatcar-stable-2512.2.0-hvm --cloud aws --networking=cilium --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes' finished in 24.341447683s
2020/08/08 17:21:06 process.go:153: Running: /tmp/kops285327455/kops validate cluster e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io --wait 15m
I0808 17:21:06.691965     183 featureflag.go:158] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io

W0808 17:21:08.162122     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0808 17:21:18.197114     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0808 17:21:28.241158     183 validate_cluster.go:221] (will retry): cluster not yet healthy
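As the message above explains, kops seeds the zone with the placeholder record 203.0.113.123 and waits for dns-controller to overwrite it, so validation can only proceed once api.<cluster-name> resolves to a real address. A small sketch of that readiness test (apiDNSReady is a hypothetical helper; the placeholder value is taken from the log):

    package main

    import (
        "fmt"
        "net"
    )

    // placeholder is the stand-in IP kops creates before dns-controller runs.
    const placeholder = "203.0.113.123"

    // apiDNSReady reports whether the API name resolves to something
    // other than the kops placeholder address.
    func apiDNSReady(host string) bool {
        addrs, err := net.LookupHost(host)
        if err != nil {
            return false // e.g. "no such host", as in the retries below
        }
        for _, a := range addrs {
            if a == placeholder {
                return false
            }
        }
        return len(addrs) > 0
    }

    func main() {
        fmt.Println(apiDNSReady("api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io"))
    }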
... skipping repeated INSTANCE GROUPS / NODE STATUS / VALIDATION ERRORS output (identical on each retry) ...
W0808 17:21:38.287159     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:21:48.335721     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:21:58.367109     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0808 17:22:08.399385     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:22:18.434639     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:22:28.470639     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:22:38.521408     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:22:48.555844     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:22:58.590923     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:23:08.707143     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:23:18.739078     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:23:28.772178     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0808 17:23:38.811823     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:23:48.847913     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:23:58.881628     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:24:08.918642     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:24:18.950325     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:24:28.970041     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0808 17:24:39.031917     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:24:49.082620     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:24:59.127214     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:25:09.177933     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:25:19.197611     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0808 17:25:29.250488     183 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0808 17:25:39.281139     183 validate_cluster.go:221] (will retry): cluster not yet healthy
W0808 17:25:49.329378     183 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 17 lines ...
Pod	kube-system/cilium-vvkfd				system-node-critical pod "cilium-vvkfd" is pending
Pod	kube-system/cilium-vvrb6				system-node-critical pod "cilium-vvrb6" is not ready (cilium-agent)
Pod	kube-system/cilium-xmfqq				system-node-critical pod "cilium-xmfqq" is not ready (cilium-agent)
Pod	kube-system/kube-dns-67689f84b-mm6jd			system-cluster-critical pod "kube-dns-67689f84b-mm6jd" is pending
Pod	kube-system/kube-dns-autoscaler-6f6dd8b99f-hdl7b	system-cluster-critical pod "kube-dns-autoscaler-6f6dd8b99f-hdl7b" is pending

Validation Failed
W0808 17:26:01.405058     183 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 12 lines ...
Pod	kube-system/cilium-h2qsg			system-node-critical pod "cilium-h2qsg" is not ready (cilium-agent)
Pod	kube-system/cilium-vvkfd			system-node-critical pod "cilium-vvkfd" is not ready (cilium-agent)
Pod	kube-system/cilium-vvrb6			system-node-critical pod "cilium-vvrb6" is not ready (cilium-agent)
Pod	kube-system/kube-dns-67689f84b-4wxj5		system-cluster-critical pod "kube-dns-67689f84b-4wxj5" is pending
Pod	kube-system/kube-dns-67689f84b-mm6jd		system-cluster-critical pod "kube-dns-67689f84b-mm6jd" is pending

Validation Failed
W0808 17:26:12.701495     183 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 9 lines ...
KIND	NAME									MESSAGE
Pod	kube-system/cilium-h2qsg						system-node-critical pod "cilium-h2qsg" is not ready (cilium-agent)
Pod	kube-system/kube-dns-67689f84b-4wxj5					system-cluster-critical pod "kube-dns-67689f84b-4wxj5" is not ready (kubedns)
Pod	kube-system/kube-dns-67689f84b-mm6jd					system-cluster-critical pod "kube-dns-67689f84b-mm6jd" is not ready (kubedns)
Pod	kube-system/kube-proxy-ip-172-20-56-123.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-56-123.eu-west-2.compute.internal" is pending

Validation Failed
W0808 17:26:23.961690     183 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-49-66.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-49-66.eu-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-119.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-119.eu-west-2.compute.internal" is pending

Validation Failed
W0808 17:26:35.279677     183 validate_cluster.go:221] (will retry): cluster not yet healthy
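Validation keeps failing at this stage because system-critical pods (cilium, kube-dns, kube-proxy) are still pending or not ready; kops requires all of them to be Ready before declaring the cluster healthy. A rough client-go equivalent of the pod half of that check (a sketch assuming a kubeconfig at the default location, not the actual validate_cluster.go logic):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                fmt.Printf("pod %q is not ready\n", p.Name)
            }
        }
    }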
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 41 lines ...
2020/08/08 17:27:10 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 443.434392ms
2020/08/08 17:27:10 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false get nodes -oyaml
2020/08/08 17:27:11 process.go:155: Step './cluster/kubectl.sh --match-server-version=false get nodes -oyaml' finished in 548.076324ms
2020/08/08 17:27:11 process.go:153: Running: kubectl get nodes --no-headers
The connection to the server api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io was refused - did you specify the right host or port?
2020/08/08 17:27:12 process.go:155: Step 'kubectl get nodes --no-headers' finished in 1.580415883s
2020/08/08 17:27:12 e2e.go:470: kubectl get nodes failed: error during kubectl get nodes --no-headers: exit status 1
2020/08/08 17:27:12 process.go:153: Running: kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller
2020/08/08 17:27:12 process.go:153: Running: /tmp/kops285327455/kops toolbox dump --name e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io -ojson
I0808 17:27:12.789950     401 featureflag.go:158] FeatureFlag "SpecOverrideFlag"=true
The connection to the server api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io was refused - did you specify the right host or port?
2020/08/08 17:27:13 process.go:155: Step 'kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller' finished in 241.364549ms
2020/08/08 17:27:13 kubernetes.go:117: kubectl get pods failed: error during kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller: exit status 1
{
    "apiVersion": "v1",
    "items": [],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
2020/08/08 17:27:19 process.go:155: Step '/tmp/kops285327455/kops toolbox dump --name e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io -ojson' finished in 6.706020178s
2020/08/08 17:27:19 process.go:153: Running: kubectl get nodes -ojson
The connection to the server api.e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io was refused - did you specify the right host or port?
2020/08/08 17:27:35 process.go:155: Step 'kubectl get nodes -ojson' finished in 15.558760216s
2020/08/08 17:27:35 kubernetes.go:35: kubectl get nodes failed: error during kubectl get nodes -ojson: exit status 1
{
    "apiVersion": "v1",
    "items": [],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
2020/08/08 17:27:35 dump.go:95: Failed to get nodes for dumping via kubectl: error during kubectl get nodes -ojson: exit status 1
2020/08/08 17:27:35 dump.go:129: dumping node not registered in kubernetes: 18.130.224.94
2020/08/08 17:27:35 dump.go:163: Dumping node 18.130.224.94
2020/08/08 17:27:36 dump.go:413: Running SSH command: sudo journalctl --output=short-precise -k
2020/08/08 17:27:36 dump.go:413: Running SSH command: sudo journalctl --output=short-precise
2020/08/08 17:27:37 dump.go:413: Running SSH command: sudo sysctl --all
2020/08/08 17:27:37 dump.go:413: Running SSH command: sudo systemctl list-units -t service --no-pager --no-legend --all
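With the API server unreachable, the dumper falls back to SSHing into each instance and collecting diagnostics directly, as the dump.go lines above show. A sketch of that collection loop (assuming key-based SSH as the Flatcar user core; illustrative, not the real dump.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // dumpNode runs the same diagnostic commands the dump above collects,
    // over a plain ssh subprocess.
    func dumpNode(host string) {
        for _, c := range []string{
            "sudo journalctl --output=short-precise -k",
            "sudo journalctl --output=short-precise",
            "sudo sysctl --all",
            "sudo systemctl list-units -t service --no-pager --no-legend --all",
        } {
            out, err := exec.Command("ssh", "core@"+host, c).Output()
            fmt.Printf("== %s (err=%v, %d bytes)\n", c, err, len(out))
        }
    }

    func main() { dumpNode("18.130.224.94") }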
... skipping 159 lines ...
	volume:vol-01bd631e0cac26348
	vpc:vpc-038ffb95e013e0921
	volume:vol-0f4c8fcc23a7649f7
	volume:vol-04e19cce2a59e2a87
	dhcp-options:dopt-07ba3f4c34684a483
	volume:vol-0927f59cbe35b28e6
I0808 17:28:25.295627     446 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-03d4a8c4517ddcc16	ok
volume:vol-04ab0d3b17fa6cd4d	still has dependencies, will retry
volume:vol-04e19cce2a59e2a87	still has dependencies, will retry
volume:vol-01bd631e0cac26348	still has dependencies, will retry
volume:vol-0730413655cc5cb86	still has dependencies, will retry
volume:vol-0927f59cbe35b28e6	still has dependencies, will retry
... skipping 14 lines ...
	internet-gateway:igw-094c10d288e4fa036
	volume:vol-01bd631e0cac26348
	volume:vol-04ab0d3b17fa6cd4d
	volume:vol-0f4c8fcc23a7649f7
	vpc:vpc-038ffb95e013e0921
volume:vol-0730413655cc5cb86	still has dependencies, will retry
I0808 17:28:36.083680     446 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-04e19cce2a59e2a87	ok
volume:vol-0927f59cbe35b28e6	still has dependencies, will retry
volume:vol-0f4c8fcc23a7649f7	still has dependencies, will retry
volume:vol-04ab0d3b17fa6cd4d	ok
volume:vol-01bd631e0cac26348	ok
subnet:subnet-0957a654f317c5628	still has dependencies, will retry
... skipping 24 lines ...
	volume:vol-0927f59cbe35b28e6
	dhcp-options:dopt-07ba3f4c34684a483
	volume:vol-0730413655cc5cb86
	security-group:sg-0d53e5fe038425cc8
	subnet:subnet-0957a654f317c5628
volume:vol-0927f59cbe35b28e6	still has dependencies, will retry
I0808 17:28:57.730521     446 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-0730413655cc5cb86	ok
I0808 17:28:57.741775     446 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-0f4c8fcc23a7649f7	ok
subnet:subnet-0957a654f317c5628	still has dependencies, will retry
internet-gateway:igw-094c10d288e4fa036	still has dependencies, will retry
security-group:sg-0d53e5fe038425cc8	still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
	security-group:sg-0d53e5fe038425cc8
... skipping 12 lines ...
	dhcp-options:dopt-07ba3f4c34684a483
	volume:vol-0927f59cbe35b28e6
	security-group:sg-0d53e5fe038425cc8
	subnet:subnet-0957a654f317c5628
	route-table:rtb-006d3ab218647a1fd
	internet-gateway:igw-094c10d288e4fa036
I0808 17:29:19.351802     446 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-0927f59cbe35b28e6	ok
subnet:subnet-0957a654f317c5628	ok
security-group:sg-0d53e5fe038425cc8	ok
internet-gateway:igw-094c10d288e4fa036	ok
route-table:rtb-006d3ab218647a1fd	ok
vpc:vpc-038ffb95e013e0921	ok
dhcp-options:dopt-07ba3f4c34684a483	ok
Deleted kubectl config for e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io

Deleted cluster: "e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io"
2020/08/08 17:29:25 process.go:155: Step '/tmp/kops285327455/kops delete cluster e2e-kops-cilium-flatcar-k16.test-cncf-aws.k8s.io --yes' finished in 1m29.676703206s
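The teardown above deletes AWS resources in rounds: anything that still has dependents is carried over to the next pass, and the loop repeats until every resource reports ok, which is why volumes and subnets log "still has dependencies, will retry" several times. A compact sketch of that pattern (deleteResource is a hypothetical stand-in for the per-type AWS delete calls):

    package main

    import "fmt"

    // deleteAll retries deletion until nothing is left: resources whose
    // dependents still exist are kept for the next round.
    func deleteAll(pending []string, deleteResource func(string) bool) {
        for len(pending) > 0 {
            var remaining []string
            for _, r := range pending {
                if deleteResource(r) {
                    fmt.Println(r + "\tok")
                } else {
                    fmt.Println(r + "\tstill has dependencies, will retry")
                    remaining = append(remaining, r)
                }
            }
            if len(remaining) > 0 {
                fmt.Println("Not all resources deleted; waiting before reattempting deletion")
                // the real loop sleeps here before the next pass
            }
            pending = remaining
        }
    }

    func main() {
        tries := map[string]int{}
        // Fake deleter: everything succeeds on its second attempt.
        del := func(r string) bool { tries[r]++; return tries[r] > 1 }
        deleteAll([]string{"volume:vol-0730413655cc5cb86", "subnet:subnet-0957a654f317c5628"}, del)
    }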
2020/08/08 17:29:25 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/08/08 17:29:25 main.go:312: Something went wrong: encountered 1 errors: [error during kubectl get nodes --no-headers: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 720, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 570, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...