PR: glenonn: Fix security group reconciliation loop
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-09-09 21:18
Elapsed: 1h49m
Revision: a5a9539f9cf652426bf23a4e0b629136cc787e4a
Refs: 1390

No Test Failures!


Error lines from build-log.txt

... skipping 188 lines ...
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.13.12

*********************************************************************************

I0909 21:21:07.113564   10385 context.go:251] hit maximum retries 1 with error file does not exist
I0909 21:21:07.214382   10385 context.go:251] hit maximum retries 1 with error file does not exist
I0909 21:21:07.775818   10385 apply_cluster.go:562] Gossip DNS: skipping DNS validation
I0909 21:21:07.862969   10385 executor.go:103] Tasks: 0 done / 102 total; 49 can run
I0909 21:21:08.399156   10385 executor.go:103] Tasks: 49 done / 102 total; 26 can run
I0909 21:21:08.940018   10385 executor.go:103] Tasks: 75 done / 102 total; 23 can run
I0909 21:21:09.827061   10385 executor.go:103] Tasks: 98 done / 102 total; 3 can run
W0909 21:21:10.076434   10385 keypair.go:140] Task did not have an address: *awstasks.LoadBalancer {"Name":"api.test-cluster-1121.k8s.local","Lifecycle":"Sync","LoadBalancerName":"api-test-cluster-1121-k8s-p8q6sj","DNSName":null,"HostedZoneId":null,"Subnets":[{"Name":"us-west-2b.test-cluster-1121.k8s.local","ShortName":"us-west-2b","Lifecycle":"Sync","ID":null,"VPC":{"Name":"test-cluster-1121.k8s.local","Lifecycle":"Sync","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"test-cluster-1121.k8s.local","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned"}},"AvailabilityZone":"us-west-2b","CIDR":"172.20.64.0/19","Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"us-west-2b.test-cluster-1121.k8s.local","SubnetType":"Public","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned","kubernetes.io/role/elb":"1"}},{"Name":"us-west-2c.test-cluster-1121.k8s.local","ShortName":"us-west-2c","Lifecycle":"Sync","ID":null,"VPC":{"Name":"test-cluster-1121.k8s.local","Lifecycle":"Sync","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"test-cluster-1121.k8s.local","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned"}},"AvailabilityZone":"us-west-2c","CIDR":"172.20.96.0/19","Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"us-west-2c.test-cluster-1121.k8s.local","SubnetType":"Public","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned","kubernetes.io/role/elb":"1"}},{"Name":"us-west-2a.test-cluster-1121.k8s.local","ShortName":"us-west-2a","Lifecycle":"Sync","ID":null,"VPC":{"Name":"test-cluster-1121.k8s.local","Lifecycle":"Sync","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"test-cluster-1121.k8s.local","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned"}},"AvailabilityZone":"us-west-2a","CIDR":"172.20.32.0/19","Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"us-west-2a.test-cluster-1121.k8s.local","SubnetType":"Public","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned","kubernetes.io/role/elb":"1"}}],"SecurityGroups":[{"Name":"api-elb.test-cluster-1121.k8s.local","Lifecycle":"Sync","ID":null,"Description":"Security group for api ELB","VPC":{"Name":"test-cluster-1121.k8s.local","Lifecycle":"Sync","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"test-cluster-1121.k8s.local","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned"}},"RemoveExtraRules":["port=443"],"Shared":null,"Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"api-elb.test-cluster-1121.k8s.local","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned"}}],"Listeners":{"443":{"InstancePort":443,"SSLCertificateID":""}},"Scheme":null,"HealthCheck":{"Target":"SSL:443","HealthyThreshold":2,"UnhealthyThreshold":2,"Interval":10,"Timeout":5},"AccessLog":null,"ConnectionDraining":null,"ConnectionSettings":{"IdleTimeout":300},"CrossZoneLoadBalancing":{"Enabled":false},"SSLCertificateID":"","Tags":{"KubernetesCluster":"test-cluster-1121.k8s.local","Name":"api.test-cluster-1121.k8s.local","kubernetes.io/cluster/test-cluster-1121.k8s.local":"owned"}}
... skipping 497 lines ...
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.13.12

*********************************************************************************

I0909 21:21:13.961454   10434 context.go:251] hit maximum retries 1 with error file does not exist
I0909 21:21:13.990756   10434 context.go:251] hit maximum retries 1 with error file does not exist
I0909 21:21:14.467359   10434 apply_cluster.go:562] Gossip DNS: skipping DNS validation
I0909 21:21:15.445303   10434 executor.go:103] Tasks: 0 done / 102 total; 49 can run
I0909 21:21:16.395017   10434 vfs_castore.go:728] Issuing new certificate: "etcd-manager-ca-events"
I0909 21:21:16.408870   10434 vfs_castore.go:728] Issuing new certificate: "apiserver-aggregator-ca"
I0909 21:21:16.416933   10434 vfs_castore.go:728] Issuing new certificate: "etcd-clients-ca"
I0909 21:21:16.480667   10434 vfs_castore.go:728] Issuing new certificate: "etcd-peers-ca-main"
... skipping 31 lines ...

Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-1121-k8s-p8q6sj-444270704.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 12 lines ...
Node	ip-172-20-60-166.us-west-2.compute.internal				master "ip-172-20-60-166.us-west-2.compute.internal" is not ready
Pod	kube-system/dns-controller-6749498bb8-5lfn5				kube-system pod "dns-controller-6749498bb8-5lfn5" is pending
Pod	kube-system/kube-dns-6c8ddfc858-ssn68					kube-system pod "kube-dns-6c8ddfc858-ssn68" is pending
Pod	kube-system/kube-dns-autoscaler-5dd55df495-rvvl2			kube-system pod "kube-dns-autoscaler-5dd55df495-rvvl2" is pending
Pod	kube-system/kube-scheduler-ip-172-20-60-166.us-west-2.compute.internal	kube-system pod "kube-scheduler-ip-172-20-60-166.us-west-2.compute.internal" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 9 lines ...
Machine	i-070fa07d5ee3bb946					machine "i-070fa07d5ee3bb946" has not yet joined cluster
Machine	i-0db093b069dde6e0e					machine "i-0db093b069dde6e0e" has not yet joined cluster
Machine	i-0fb7b44698852f362					machine "i-0fb7b44698852f362" has not yet joined cluster
Pod	kube-system/kube-dns-6c8ddfc858-ssn68			kube-system pod "kube-dns-6c8ddfc858-ssn68" is pending
Pod	kube-system/kube-dns-autoscaler-5dd55df495-rvvl2	kube-system pod "kube-dns-autoscaler-5dd55df495-rvvl2" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 10 lines ...
VALIDATION ERRORS
KIND	NAME							MESSAGE
Node	ip-172-20-116-209.us-west-2.compute.internal		node "ip-172-20-116-209.us-west-2.compute.internal" is not ready
Pod	kube-system/kube-dns-6c8ddfc858-ssn68			kube-system pod "kube-dns-6c8ddfc858-ssn68" is pending
Pod	kube-system/kube-dns-autoscaler-5dd55df495-rvvl2	kube-system pod "kube-dns-autoscaler-5dd55df495-rvvl2" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-1121.k8s.local

Validating cluster test-cluster-1121.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 444 lines ...
vpc:vpc-0d947bf691debb70c	still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
	dhcp-options:dopt-06869554ba3d87e4d
	vpc:vpc-0d947bf691debb70c

not making progress deleting resources; giving up
2020/09/09 23:07:27 Failed to run tear down step: exit status 1
2020/09/09 23:07:27 signal: killed
make: *** [Makefile:50: e2e-test] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...