Result: FAILURE
Tests: 1 failed / 234 succeeded
Started: 2020-10-18 10:52
Elapsed: 17m19s
Revision
job-version: v1.15.12
revision: v1.15.12

Test Failures


kubectl version (40s)

error starting ./cluster/kubectl.sh --match-server-version=false version: exec: already started
				from junit_runner.xml
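
The failing step is the harness's kubectl smoke test, and "exec: already started" is the error Go's os/exec package returns when Start (or Run) is invoked a second time on the same exec.Cmd value; the sub-millisecond retries later in the log (around 10:58:35) are consistent with a command object being reused rather than rebuilt. A minimal, self-contained sketch of how that error arises (the echo command stands in for ./cluster/kubectl.sh and is purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One exec.Cmd value; "echo" stands in for ./cluster/kubectl.sh.
	cmd := exec.Command("echo", "version")

	// First Start succeeds and sets cmd.Process.
	if err := cmd.Start(); err != nil {
		fmt.Println("first start:", err)
		return
	}
	_ = cmd.Wait()

	// Reusing the same Cmd fails immediately: os/exec sees cmd.Process != nil
	// and returns the error quoted in the test failure above.
	if err := cmd.Start(); err != nil {
		fmt.Println("second start:", err) // exec: already started
	}
}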



Passed tests: 234 (not listed here)

Skipped tests: 4191 (not listed here)

Error lines from build-log.txt

Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --cluster=e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --deployment=kops --kops-ssh-user=admin --env=KUBE_SSH_USER=admin --env=KOPS_DEPLOY_LATEST_URL=https://storage.googleapis.com/kubernetes-release/release/stable-1.15.txt --env=KOPS_KUBE_RELEASE_URL=https://storage.googleapis.com/kubernetes-release/release --extract=release/stable-1.15 --ginkgo-parallel --kops-priority-path=/workspace/kubernetes/platforms/linux/amd64 --kops-version=https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt --provider=aws '--test_args=--ginkgo.focus=\[Conformance\]|\[NodeConformance\] --ginkgo.skip=\[Slow\]|\[Serial\]|AdmissionWebhook|Aggregator|CustomResource' --timeout=60m
starts with local mode
Environment:
ARTIFACTS=/logs/artifacts
AWS_DEFAULT_PROFILE=default
AWS_PROFILE=default
... skipping 147 lines ...
2020/10/18 10:53:08 process.go:155: Step './get-kube.sh' finished in 22.639673718s
2020/10/18 10:53:08 process.go:153: Running: /tmp/kops847402944/kops get clusters e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io

cluster not found "e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io"
2020/10/18 10:53:09 process.go:155: Step '/tmp/kops847402944/kops get clusters e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io' finished in 553.666627ms
2020/10/18 10:53:09 util.go:42: curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2020/10/18 10:53:09 kops.go:514: failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
2020/10/18 10:53:09 util.go:68: curl https://ip.jsb.workers.dev
2020/10/18 10:53:09 kops.go:439: Using external IP for admin access: 34.67.240.224/32
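
The two curl lines above show how the harness discovers its own public IP before creating the cluster: it first asks the GCE metadata server for an external IP (which returns 404 in this pod), then falls back to an external echo service and uses the result, suffixed with /32, as the kops --admin-access CIDR on the next line. A rough, hypothetical sketch of that fallback (the helper names and the plain-text response format are assumptions, not the actual util.go/kops.go code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

const metadataURL = "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"

// externalIP tries the GCE metadata server first and falls back to a public
// echo service when that fails (e.g. the 404 seen in the log above).
func externalIP() (string, error) {
	if ip, err := fetchPlainText(metadataURL, map[string]string{"Metadata-Flavor": "Google"}); err == nil {
		return ip, nil
	}
	return fetchPlainText("https://ip.jsb.workers.dev", nil)
}

// fetchPlainText GETs a URL and returns the trimmed body, treating any
// non-200 status as an error.
func fetchPlainText(url string, headers map[string]string) (string, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", err
	}
	for k, v := range headers {
		req.Header.Set(k, v)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("%s returned %d", url, resp.StatusCode)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	ip, err := externalIP()
	if err != nil {
		fmt.Println("could not determine external IP:", err)
		return
	}
	// The harness passes this as --admin-access <ip>/32 to kops create cluster.
	fmt.Printf("Using external IP for admin access: %s/32\n", ip)
}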
2020/10/18 10:53:09 process.go:153: Running: /tmp/kops847402944/kops create cluster --name e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-central-1b --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.15.12 --admin-access 34.67.240.224/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes
I1018 10:53:09.173435     153 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I1018 10:53:09.229535     153 create_cluster.go:711] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
I1018 10:53:10.280627     153 subnets.go:180] Assigned CIDR 172.20.32.0/19 to subnet eu-central-1b
... skipping 44 lines ...

2020/10/18 10:53:35 process.go:155: Step '/tmp/kops847402944/kops create cluster --name e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-central-1b --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.15.12 --admin-access 34.67.240.224/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes' finished in 26.446264012s
2020/10/18 10:53:35 process.go:153: Running: /tmp/kops847402944/kops validate cluster e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --wait 15m
I1018 10:53:35.620086     175 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io

W1018 10:53:36.988105     175 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:53:47.022809     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:53:57.086065     175 validate_cluster.go:221] (will retry): cluster not yet healthy
W1018 10:54:07.119085     175 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:54:17.151722     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:54:27.190731     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:54:37.241751     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:54:47.280228     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:54:57.312252     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:55:07.345425     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:55:17.382079     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:55:27.416424     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:55:37.453787     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:55:47.499841     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:55:57.530437     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:56:07.575223     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:56:17.608718     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:56:27.656843     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:56:37.726819     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:56:47.794921     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1018 10:56:57.847453     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

... skipping 9 lines ...
Machine	i-0a58373f9f6c36f38								machine "i-0a58373f9f6c36f38" has not yet joined cluster
Node	ip-172-20-38-52.eu-central-1.compute.internal					master "ip-172-20-38-52.eu-central-1.compute.internal" is missing kube-scheduler pod
Pod	kube-system/kube-dns-6fdc66c546-st6ss						system-cluster-critical pod "kube-dns-6fdc66c546-st6ss" is pending
Pod	kube-system/kube-dns-autoscaler-7cb5768b84-2nlqj				system-cluster-critical pod "kube-dns-autoscaler-7cb5768b84-2nlqj" is pending
Pod	kube-system/kube-scheduler-ip-172-20-38-52.eu-central-1.compute.internal	system-cluster-critical pod "kube-scheduler-ip-172-20-38-52.eu-central-1.compute.internal" is pending

Validation Failed
W1018 10:57:10.396872     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

... skipping 7 lines ...
Machine	i-0306e1ed6700bd862					machine "i-0306e1ed6700bd862" has not yet joined cluster
Machine	i-0330548d4ab08f474					machine "i-0330548d4ab08f474" has not yet joined cluster
Machine	i-086fc3df564b8083f					machine "i-086fc3df564b8083f" has not yet joined cluster
Pod	kube-system/kube-dns-6fdc66c546-st6ss			system-cluster-critical pod "kube-dns-6fdc66c546-st6ss" is pending
Pod	kube-system/kube-dns-autoscaler-7cb5768b84-2nlqj	system-cluster-critical pod "kube-dns-autoscaler-7cb5768b84-2nlqj" is pending

Validation Failed
W1018 10:57:21.904410     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

... skipping 7 lines ...
KIND	NAME					MESSAGE
Machine	i-0330548d4ab08f474			machine "i-0330548d4ab08f474" has not yet joined cluster
Machine	i-086fc3df564b8083f			machine "i-086fc3df564b8083f" has not yet joined cluster
Pod	kube-system/kube-dns-6fdc66c546-kn8xv	system-cluster-critical pod "kube-dns-6fdc66c546-kn8xv" is not ready (kubedns)
Pod	kube-system/kube-dns-6fdc66c546-st6ss	system-cluster-critical pod "kube-dns-6fdc66c546-st6ss" is not ready (kubedns)

Validation Failed
W1018 10:57:33.514261     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

... skipping 6 lines ...
VALIDATION ERRORS
KIND	NAME									MESSAGE
Machine	i-0330548d4ab08f474							machine "i-0330548d4ab08f474" has not yet joined cluster
Machine	i-086fc3df564b8083f							machine "i-086fc3df564b8083f" has not yet joined cluster
Pod	kube-system/kube-proxy-ip-172-20-40-195.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-40-195.eu-central-1.compute.internal" is pending

Validation Failed
W1018 10:57:44.983046     175 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1b	Master	c5.large	1	1	eu-central-1b
nodes-eu-central-1b	Node	t3.medium	4	4	eu-central-1b

... skipping 49 lines ...
ip-172-20-40-195.eu-central-1.compute.internal   Ready   node     62s     v1.15.12
ip-172-20-57-51.eu-central-1.compute.internal    Ready   node     40s     v1.15.12
ip-172-20-58-174.eu-central-1.compute.internal   Ready   node     41s     v1.15.12
2020/10/18 10:58:25 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
Unable to connect to the server: dial tcp: lookup api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
2020/10/18 10:58:25 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 304.706018ms
2020/10/18 10:58:25 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2020/10/18 10:58:35 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2020/10/18 10:58:35 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 14.351µs
2020/10/18 10:58:35 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2020/10/18 10:58:45 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2020/10/18 10:58:45 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 12.594µs
2020/10/18 10:58:45 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2020/10/18 10:58:55 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2020/10/18 10:58:55 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 22.915µs
2020/10/18 10:58:55 e2e.go:334: Failed to reach api. Sleeping for 10 seconds before retrying... ([./cluster/kubectl.sh --match-server-version=false version])
2020/10/18 10:59:05 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2020/10/18 10:59:05 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 18.526µs
2020/10/18 10:59:05 process.go:153: Running: kubectl config view --minify -ojson --kubeconfig /tmp/kops847402944/kubeconfig
2020/10/18 10:59:05 process.go:155: Step 'kubectl config view --minify -ojson --kubeconfig /tmp/kops847402944/kubeconfig' finished in 150.510251ms
2020/10/18 10:59:05 kops.go:666: running ginkgo tests directly
2020/10/18 10:59:05 runner.go:220: bazel-bin not found at bazel-bin
... skipping 426 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 18 10:59:37.812: INFO: Successfully updated pod "pod-update-activedeadlineseconds-33b2e644-881d-4e8e-9b98-e51779dc1773"
Oct 18 10:59:37.812: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-33b2e644-881d-4e8e-9b98-e51779dc1773" in namespace "pods-4774" to be "terminated due to deadline exceeded"
Oct 18 10:59:37.928: INFO: Pod "pod-update-activedeadlineseconds-33b2e644-881d-4e8e-9b98-e51779dc1773": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 115.835822ms
Oct 18 10:59:37.928: INFO: Pod "pod-update-activedeadlineseconds-33b2e644-881d-4e8e-9b98-e51779dc1773" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 18 10:59:37.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4774" for this suite.
Oct 18 10:59:44.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 2206 lines ...
Oct 18 10:59:31.915: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Oct 18 10:59:31.916: INFO: Waiting for all frontend pods to be Running.
Oct 18 10:59:52.076: INFO: Waiting for frontend to serve content.
Oct 18 10:59:57.202: INFO: Trying to add a new entry to the guestbook.
Oct 18 10:59:57.324: INFO: Verifying that added entry can be retrieved.
Oct 18 10:59:57.822: INFO: Failed to get response from guestbook. err: <nil>, response: {"data": ""}
STEP: using delete to clean up resources
Oct 18 11:00:02.957: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-9835'
Oct 18 11:00:03.635: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 18 11:00:03.635: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Oct 18 11:00:03.635: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-9835'
... skipping 729 lines ...
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 18 11:00:58.744: INFO: >>> kubeConfig: /tmp/kops847402944/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-765f00c4-df19-4e31-b0d8-3e8392a4a2bd
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 18 11:00:59.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7666" for this suite.
Oct 18 11:01:05.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 18 11:01:09.925: INFO: namespace configmap-7666 deletion completed in 10.484562364s


• [SLOW TEST:11.181 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 1226 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 18 11:01:35.506: INFO: >>> kubeConfig: /tmp/kops847402944/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-394139e8-0edb-42af-a27e-aaff35012f11
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 18 11:01:36.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7180" for this suite.
Oct 18 11:01:42.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 18 11:01:46.674: INFO: namespace secrets-7180 deletion completed in 10.479829274s


• [SLOW TEST:11.169 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 530 lines ...
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:205
Oct 18 11:01:48.954: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-55365ef3-4d4f-4c6f-9b34-d627e156475f" in namespace "security-context-test-6922" to be "success or failure"
Oct 18 11:01:49.070: INFO: Pod "busybox-readonly-true-55365ef3-4d4f-4c6f-9b34-d627e156475f": Phase="Pending", Reason="", readiness=false. Elapsed: 115.108044ms
Oct 18 11:01:51.185: INFO: Pod "busybox-readonly-true-55365ef3-4d4f-4c6f-9b34-d627e156475f": Phase="Failed", Reason="", readiness=false. Elapsed: 2.230284442s
Oct 18 11:01:51.185: INFO: Pod "busybox-readonly-true-55365ef3-4d4f-4c6f-9b34-d627e156475f" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 18 11:01:51.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6922" for this suite.
Oct 18 11:01:57.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 135 lines ...
Oct 18 11:01:40.054: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 18 11:01:40.054: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig describe pod redis-master-c54pq --namespace=kubectl-1197'
Oct 18 11:01:40.855: INFO: stderr: ""
Oct 18 11:01:40.855: INFO: stdout: "Name:           redis-master-c54pq\nNamespace:      kubectl-1197\nPriority:       0\nNode:           ip-172-20-40-195.eu-central-1.compute.internal/172.20.40.195\nStart Time:     Sun, 18 Oct 2020 11:01:37 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             100.96.2.59\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://18a6cacc7426e5054e7b9c1573758399725e9704a86ef403605cc9e886de025b\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 18 Oct 2020 11:01:38 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m7987 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-m7987:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-m7987\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                                                     Message\n  ----    ------     ----  ----                                                     -------\n  Normal  Scheduled  3s    default-scheduler                                        Successfully assigned kubectl-1197/redis-master-c54pq to ip-172-20-40-195.eu-central-1.compute.internal\n  Normal  Pulled     2s    kubelet, ip-172-20-40-195.eu-central-1.compute.internal  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, ip-172-20-40-195.eu-central-1.compute.internal  Created container redis-master\n  Normal  Started    2s    kubelet, ip-172-20-40-195.eu-central-1.compute.internal  Started container redis-master\n"
Oct 18 11:01:40.855: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig describe rc redis-master --namespace=kubectl-1197'
Oct 18 11:01:41.773: INFO: stderr: ""
Oct 18 11:01:41.773: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1197\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-c54pq\n"
Oct 18 11:01:41.773: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig describe service redis-master --namespace=kubectl-1197'
Oct 18 11:01:42.712: INFO: stderr: ""
Oct 18 11:01:42.712: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1197\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                100.71.144.46\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         100.96.2.59:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct 18 11:01:42.828: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig describe node ip-172-20-36-125.eu-central-1.compute.internal'
Oct 18 11:01:43.866: INFO: stderr: ""
Oct 18 11:01:43.866: INFO: stdout: "Name:               ip-172-20-36-125.eu-central-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=eu-central-1\n                    failure-domain.beta.kubernetes.io/zone=eu-central-1b\n                    kops.k8s.io/instancegroup=nodes-eu-central-1b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-36-125.eu-central-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 18 Oct 2020 10:57:13 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 18 Oct 2020 10:57:22 +0000   Sun, 18 Oct 2020 10:57:22 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Sun, 18 Oct 2020 11:01:13 +0000   Sun, 18 Oct 2020 10:57:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 18 Oct 2020 11:01:13 +0000   Sun, 18 Oct 2020 10:57:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 18 Oct 2020 11:01:13 +0000   Sun, 18 Oct 2020 10:57:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 18 Oct 2020 11:01:13 +0000   Sun, 18 Oct 2020 10:57:20 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:   172.20.36.125\n  ExternalIP:   3.120.205.146\n  Hostname:     ip-172-20-36-125.eu-central-1.compute.internal\n  InternalDNS:  ip-172-20-36-125.eu-central-1.compute.internal\n  ExternalDNS:  ec2-3-120-205-146.eu-central-1.compute.amazonaws.com\nCapacity:\n attachable-volumes-aws-ebs:  25\n cpu:                         2\n ephemeral-storage:           47115904Ki\n hugepages-1Gi:               0\n hugepages-2Mi:               0\n memory:                      3989404Ki\n pods:                        110\nAllocatable:\n attachable-volumes-aws-ebs:  25\n cpu:                         2\n ephemeral-storage:           43422017055\n hugepages-1Gi:               0\n hugepages-2Mi:               0\n memory:                      3887004Ki\n pods:                        110\nSystem Info:\n Machine ID:                 ec2cef1f166bba998b3489afc9451a91\n System UUID:                EC2CEF1F-166B-BA99-8B34-89AFC9451A91\n Boot ID:                    6db97a5a-f144-4cb1-8747-efe80ea9a86a\n Kernel Version:             4.9.0-13-amd64\n OS Image:                   Debian GNU/Linux 9 (stretch)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.6.3\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     100.96.1.0/24\nProviderID:                  
aws:///eu-central-1b/i-0a58373f9f6c36f38\nNon-terminated Pods:         (7 in total)\n  Namespace                  Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                         ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-dns-6fdc66c546-kn8xv                                    260m (13%)    0 (0%)      110Mi (2%)       170Mi (4%)     4m20s\n  kube-system                kube-dns-6fdc66c546-st6ss                                    260m (13%)    0 (0%)      110Mi (2%)       170Mi (4%)     5m42s\n  kube-system                kube-dns-autoscaler-7cb5768b84-2nlqj                         20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         5m41s\n  kube-system                kube-proxy-ip-172-20-36-125.eu-central-1.compute.internal    100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m6s\n  pod-network-test-5194      netserver-1                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s\n  statefulset-7598           ss2-1                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s\n  statefulset-923            ss2-0                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests    Limits\n  --------                    --------    ------\n  cpu                         640m (32%)  0 (0%)\n  memory                      230Mi (6%)  340Mi (8%)\n  ephemeral-storage           0 (0%)      0 (0%)\n  attachable-volumes-aws-ebs  0           0\nEvents:\n  Type    Reason                   Age                   From                                                        Message\n  ----    ------                   ----                  ----                                                        -------\n  Normal  Starting                 7m6s                  kubelet, ip-172-20-36-125.eu-central-1.compute.internal     Starting kubelet.\n  Normal  NodeAllocatableEnforced  7m6s                  kubelet, ip-172-20-36-125.eu-central-1.compute.internal     Updated Node Allocatable limit across pods\n  Normal  NodeHasSufficientPID     6m36s (x7 over 7m6s)  kubelet, ip-172-20-36-125.eu-central-1.compute.internal     Node ip-172-20-36-125.eu-central-1.compute.internal status is now: NodeHasSufficientPID\n  Normal  Starting                 6m33s                 kube-proxy, ip-172-20-36-125.eu-central-1.compute.internal  Starting kube-proxy.\n  Normal  NodeHasSufficientMemory  6m6s (x8 over 7m6s)   kubelet, ip-172-20-36-125.eu-central-1.compute.internal     Node ip-172-20-36-125.eu-central-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    6m6s (x8 over 7m6s)   kubelet, ip-172-20-36-125.eu-central-1.compute.internal     Node ip-172-20-36-125.eu-central-1.compute.internal status is now: NodeHasNoDiskPressure\n"
... skipping 633 lines ...
STEP: Creating a kubernetes client
Oct 18 11:01:29.208: INFO: >>> kubeConfig: /tmp/kops847402944/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Oct 18 11:01:29.670: INFO: PodSpec: initContainers in spec.initContainers
Oct 18 11:02:11.072: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-19d0eabd-56e6-4eb7-8a42-9b3e0469abed", GenerateName:"", Namespace:"init-container-2071", SelfLink:"/api/v1/namespaces/init-container-2071/pods/pod-init-19d0eabd-56e6-4eb7-8a42-9b3e0469abed", UID:"41e1b1a9-a3a7-42af-91c2-ded7feb964e9", ResourceVersion:"4588", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63738615689, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"670376386"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lzmrw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001cc28c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lzmrw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lzmrw", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lzmrw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0027f91a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-40-195.eu-central-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002250de0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027f9220)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027f9240)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0027f9248), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0027f924c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738615689, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with 
incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738615689, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738615689, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738615689, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.40.195", PodIP:"100.96.2.58", StartTime:(*v1.Time)(0xc002e4c680), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00015eee0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00015ef50)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1aff197e316cbb342d4a7a28bfeb43ae96d997dccf426627e3103150427d105e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e4c6c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e4c6a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 18 11:02:11.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2071" for this suite.
Oct 18 11:02:33.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 18 11:02:37.668: INFO: namespace init-container-2071 deletion completed in 26.479969512s


• [SLOW TEST:68.460 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
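The test that just passed creates a pod whose first init container keeps failing on a RestartAlways pod and then checks that the app container run1 never starts. As an aid to reading the struct dump above, here is a rough Go sketch of that pod spec using the k8s.io/api types the dump is printed from (it assumes the k8s.io/api and k8s.io/apimachinery modules are available). The container names, images, restart policy and the 100m / 52428800 resource values are taken from the dump; the /bin/false and /bin/true commands are assumptions, since the dump does not show them. The program only builds and prints the spec; it does not talk to a cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Guaranteed QoS: limits == requests, matching the dumped ResourceRequirements.
	res := corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("52428800"),
		},
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("52428800"),
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 keeps failing, so init2 stays waiting and run1 never starts.
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/false"}, Resources: res},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}, Resources: res},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1", Resources: res},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec)
}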
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 1179 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8337
STEP: Creating statefulset with conflicting port in namespace statefulset-8337
STEP: Waiting until pod test-pod will start running in namespace statefulset-8337
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8337
Oct 18 11:02:43.244: INFO: Observed stateful pod in namespace: statefulset-8337, name: ss-0, uid: c107e6e3-e16e-4492-a744-4e544cac0b18, status phase: Pending. Waiting for statefulset controller to delete.
Oct 18 11:02:43.649: INFO: Observed stateful pod in namespace: statefulset-8337, name: ss-0, uid: c107e6e3-e16e-4492-a744-4e544cac0b18, status phase: Failed. Waiting for statefulset controller to delete.
Oct 18 11:02:43.653: INFO: Observed stateful pod in namespace: statefulset-8337, name: ss-0, uid: c107e6e3-e16e-4492-a744-4e544cac0b18, status phase: Failed. Waiting for statefulset controller to delete.
Oct 18 11:02:43.655: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8337
STEP: Removing pod with conflicting port in namespace statefulset-8337
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8337 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Oct 18 11:02:46.005: INFO: Deleting all statefulset in ns statefulset-8337
... skipping 24 lines ...
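The StatefulSet test above pins a plain pod to a node with a given host port and then creates a statefulset whose pod template asks for the same host port, so ss-0 repeatedly shows up as Failed until the conflicting pod is removed. A minimal Go sketch of the conflicting port stanza follows; the 21017 port number and the container names are placeholders, not values taken from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Two pod templates asking for the same HostPort on the same node can never
	// both run; the kubelet rejects the second one, which is what the repeated
	// "status phase: Failed" lines above show for ss-0. Port 21017 is a placeholder.
	conflicting := corev1.ContainerPort{ContainerPort: 21017, HostPort: 21017, Protocol: corev1.ProtocolTCP}

	bare := corev1.Container{Name: "test-pod", Image: "k8s.gcr.io/pause:3.1", Ports: []corev1.ContainerPort{conflicting}}
	fromStatefulSet := corev1.Container{Name: "nginx", Image: "nginx", Ports: []corev1.ContainerPort{conflicting}}

	fmt.Println(bare.Ports[0].HostPort == fromStatefulSet.Ports[0].HostPort) // true -> host port conflict
}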
Oct 18 11:03:04.809: INFO: >>> kubeConfig: /tmp/kops847402944/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 18 11:03:07.843: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 1596 lines ...
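The Container Runtime test above relies on TerminationMessagePolicy FallbackToLogsOnError: the container writes nothing to /dev/termination-log and exits non-zero, so the kubelet falls back to the tail of the container log (DONE) as the termination message, which is what the "Expected: &{DONE}" line checks. A small Go sketch of such a container spec follows; the echo/false command line is an assumption, since the log does not show the container's command.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Nothing is written to the termination message path, the container exits
	// non-zero, and the policy tells the kubelet to use the log output instead.
	c := corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox:1.29",
		Command:                  []string{"/bin/sh", "-c", "/bin/echo -n DONE; /bin/false"},
		TerminationMessagePath:   corev1.TerminationMessagePathDefault,
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}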
STEP: Creating a kubernetes client
Oct 18 11:03:32.248: INFO: >>> kubeConfig: /tmp/kops847402944/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Oct 18 11:03:32.708: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 18 11:03:37.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Oct 18 11:03:48.325: INFO: namespace init-container-3918 deletion completed in 10.484387816s


• [SLOW TEST:16.077 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Oct 18 11:03:48.326: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [k8s.io] Pods
... skipping 1030 lines ...
Oct 18 11:02:59.347: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Oct 18 11:02:59.347: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 18 11:02:59.347: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 18 11:02:59.347: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:03:00.404: INFO: rc: 1
Oct 18 11:03:00.404: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  error: unable to upgrade connection: container not found ("nginx")
 [] <nil> 0xc002023d10 exit status 1 <nil> <nil> true [0xc00095b030 0xc00095b168 0xc00095b1d0] [0xc00095b030 0xc00095b168 0xc00095b1d0] [0xc00095b0f8 0xc00095b1a0] [0xba70e0 0xba70e0] 0xc003173140 <nil>}:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Oct 18 11:03:10.404: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:03:11.472: INFO: rc: 1
Oct 18 11:03:11.472: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  error: unable to upgrade connection: container not found ("nginx")
 [] <nil> 0xc001b5e090 exit status 1 <nil> <nil> true [0xc001fea800 0xc001fea878 0xc001fea898] [0xc001fea800 0xc001fea878 0xc001fea898] [0xc001fea858 0xc001fea888] [0xba70e0 0xba70e0] 0xc0030bb920 <nil>}:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Oct 18 11:03:21.472: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:03:22.165: INFO: rc: 1
Oct 18 11:03:22.165: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc001b5e750 exit status 1 <nil> <nil> true [0xc001fea8a8 0xc001fea900 0xc001fea948] [0xc001fea8a8 0xc001fea900 0xc001fea948] [0xc001fea8e8 0xc001fea938] [0xba70e0 0xba70e0] 0xc0030bbc20 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:03:32.165: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:03:32.848: INFO: rc: 1
Oct 18 11:03:32.848: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc0025c9e00 exit status 1 <nil> <nil> true [0xc0018a08a8 0xc0018a08c8 0xc0018a0948] [0xc0018a08a8 0xc0018a08c8 0xc0018a0948] [0xc0018a08b8 0xc0018a0928] [0xba70e0 0xba70e0] 0xc00327fe00 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:03:42.849: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:03:43.530: INFO: rc: 1
Oct 18 11:03:43.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc001b5ee40 exit status 1 <nil> <nil> true [0xc001fea980 0xc001fea9b8 0xc001fea9d8] [0xc001fea980 0xc001fea9b8 0xc001fea9d8] [0xc001fea9b0 0xc001fea9c8] [0xba70e0 0xba70e0] 0xc0030bbf20 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:03:53.530: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:03:54.210: INFO: rc: 1
Oct 18 11:03:54.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc00214e420 exit status 1 <nil> <nil> true [0xc00095b220 0xc00095b298 0xc00095b3d8] [0xc00095b220 0xc00095b298 0xc00095b3d8] [0xc00095b248 0xc00095b370] [0xba70e0 0xba70e0] 0xc003173440 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:04:04.211: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:04:04.890: INFO: rc: 1
Oct 18 11:04:04.890: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a42000 exit status 1 <nil> <nil> true [0xc0018a0960 0xc0018a09a0 0xc0018a09d0] [0xc0018a0960 0xc0018a09a0 0xc0018a09d0] [0xc0018a0980 0xc0018a09c8] [0xba70e0 0xba70e0] 0xc0031c6120 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:04:14.891: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:04:15.568: INFO: rc: 1
Oct 18 11:04:15.568: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8c960 exit status 1 <nil> <nil> true [0xc0000102a0 0xc000599680 0xc000599c28] [0xc0000102a0 0xc000599680 0xc000599c28] [0xc000011f98 0xc000599990] [0xba70e0 0xba70e0] 0xc00327e5a0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:04:25.570: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:04:26.275: INFO: rc: 1
Oct 18 11:04:26.275: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8d020 exit status 1 <nil> <nil> true [0xc001fea000 0xc001fea058 0xc001fea0d8] [0xc001fea000 0xc001fea058 0xc001fea0d8] [0xc001fea030 0xc001fea090] [0xba70e0 0xba70e0] 0xc00327e8a0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:04:36.275: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:04:36.974: INFO: rc: 1
Oct 18 11:04:36.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8d710 exit status 1 <nil> <nil> true [0xc001fea0f8 0xc001fea130 0xc001fea198] [0xc001fea0f8 0xc001fea130 0xc001fea198] [0xc001fea118 0xc001fea178] [0xba70e0 0xba70e0] 0xc00327eba0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:04:46.975: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:04:47.673: INFO: rc: 1
Oct 18 11:04:47.673: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc0020228a0 exit status 1 <nil> <nil> true [0xc00095a2c0 0xc00095ab40 0xc00095add0] [0xc00095a2c0 0xc00095ab40 0xc00095add0] [0xc00095ab08 0xc00095ac88] [0xba70e0 0xba70e0] 0xc003172480 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:04:57.673: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:04:58.354: INFO: rc: 1
Oct 18 11:04:58.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002022f60 exit status 1 <nil> <nil> true [0xc00095ae10 0xc00095b030 0xc00095b168] [0xc00095ae10 0xc00095b030 0xc00095b168] [0xc00095b020 0xc00095b0f8] [0xba70e0 0xba70e0] 0xc003172780 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:05:08.355: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:05:09.057: INFO: rc: 1
Oct 18 11:05:09.058: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc001a1ec60 exit status 1 <nil> <nil> true [0xc002a74000 0xc002a74018 0xc002a74030] [0xc002a74000 0xc002a74018 0xc002a74030] [0xc002a74010 0xc002a74028] [0xba70e0 0xba70e0] 0xc0031c6d20 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:05:19.058: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:05:19.769: INFO: rc: 1
Oct 18 11:05:19.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002023620 exit status 1 <nil> <nil> true [0xc00095b178 0xc00095b220 0xc00095b298] [0xc00095b178 0xc00095b220 0xc00095b298] [0xc00095b1d0 0xc00095b248] [0xba70e0 0xba70e0] 0xc003172a80 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:05:29.769: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:05:30.462: INFO: rc: 1
Oct 18 11:05:30.463: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc0025c86f0 exit status 1 <nil> <nil> true [0xc0018a0010 0xc0018a00a0 0xc0018a00e8] [0xc0018a0010 0xc0018a00a0 0xc0018a00e8] [0xc0018a0088 0xc0018a00c0] [0xba70e0 0xba70e0] 0xc0030ba3c0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:05:40.463: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:05:41.173: INFO: rc: 1
Oct 18 11:05:41.173: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8de00 exit status 1 <nil> <nil> true [0xc001fea1b0 0xc001fea210 0xc001fea248] [0xc001fea1b0 0xc001fea210 0xc001fea248] [0xc001fea1f0 0xc001fea238] [0xba70e0 0xba70e0] 0xc00327ef00 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:05:51.174: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:05:51.854: INFO: rc: 1
Oct 18 11:05:51.854: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002d4e5d0 exit status 1 <nil> <nil> true [0xc001fea250 0xc001fea298 0xc001fea300] [0xc001fea250 0xc001fea298 0xc001fea300] [0xc001fea278 0xc001fea2c8] [0xba70e0 0xba70e0] 0xc00327f320 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:06:01.855: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:06:02.543: INFO: rc: 1
Oct 18 11:06:02.544: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc0025c8db0 exit status 1 <nil> <nil> true [0xc0018a0168 0xc0018a01f0 0xc0018a0238] [0xc0018a0168 0xc0018a01f0 0xc0018a0238] [0xc0018a01d0 0xc0018a0230] [0xba70e0 0xba70e0] 0xc0030ba6c0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:06:12.544: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:06:13.234: INFO: rc: 1
Oct 18 11:06:13.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8c930 exit status 1 <nil> <nil> true [0xc000599878 0xc0000102a0 0xc00095a2c0] [0xc000599878 0xc0000102a0 0xc00095a2c0] [0xc000599c28 0xc000011f98] [0xba70e0 0xba70e0] 0xc0030ba3c0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:06:23.235: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:06:23.933: INFO: rc: 1
Oct 18 11:06:23.933: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002022930 exit status 1 <nil> <nil> true [0xc001fea000 0xc001fea058 0xc001fea0d8] [0xc001fea000 0xc001fea058 0xc001fea0d8] [0xc001fea030 0xc001fea090] [0xba70e0 0xba70e0] 0xc003172480 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:06:33.933: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:06:34.635: INFO: rc: 1
Oct 18 11:06:34.636: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002d4e7e0 exit status 1 <nil> <nil> true [0xc0018a0010 0xc0018a00a0 0xc0018a00e8] [0xc0018a0010 0xc0018a00a0 0xc0018a00e8] [0xc0018a0088 0xc0018a00c0] [0xba70e0 0xba70e0] 0xc00327e5a0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:06:44.636: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:06:45.322: INFO: rc: 1
Oct 18 11:06:45.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002d4eea0 exit status 1 <nil> <nil> true [0xc0018a0168 0xc0018a01f0 0xc0018a0238] [0xc0018a0168 0xc0018a01f0 0xc0018a0238] [0xc0018a01d0 0xc0018a0230] [0xba70e0 0xba70e0] 0xc00327e8a0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:06:55.322: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:06:56.014: INFO: rc: 1
Oct 18 11:06:56.014: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8d080 exit status 1 <nil> <nil> true [0xc00095a650 0xc00095ab68 0xc00095ae10] [0xc00095a650 0xc00095ab68 0xc00095ae10] [0xc00095ab40 0xc00095add0] [0xba70e0 0xba70e0] 0xc0030ba6c0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:07:06.014: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:07:06.704: INFO: rc: 1
Oct 18 11:07:06.704: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002d4f560 exit status 1 <nil> <nil> true [0xc0018a0250 0xc0018a02d0 0xc0018a0350] [0xc0018a0250 0xc0018a02d0 0xc0018a0350] [0xc0018a02b8 0xc0018a0348] [0xba70e0 0xba70e0] 0xc00327eba0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:07:16.705: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:07:17.393: INFO: rc: 1
Oct 18 11:07:17.393: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002a8d7d0 exit status 1 <nil> <nil> true [0xc00095b000 0xc00095b0a0 0xc00095b178] [0xc00095b000 0xc00095b0a0 0xc00095b178] [0xc00095b030 0xc00095b168] [0xba70e0 0xba70e0] 0xc0030ba9c0 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:07:27.393: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:07:28.079: INFO: rc: 1
Oct 18 11:07:28.079: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc0025c86c0 exit status 1 <nil> <nil> true [0xc002a74000 0xc002a74018 0xc002a74030] [0xc002a74000 0xc002a74018 0xc002a74030] [0xc002a74010 0xc002a74028] [0xba70e0 0xba70e0] 0xc0031c6a20 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:07:38.080: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:07:38.770: INFO: rc: 1
Oct 18 11:07:38.770: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc002023080 exit status 1 <nil> <nil> true [0xc001fea0f8 0xc001fea130 0xc001fea198] [0xc001fea0f8 0xc001fea130 0xc001fea198] [0xc001fea118 0xc001fea178] [0xba70e0 0xba70e0] 0xc003172780 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:07:48.770: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:07:49.492: INFO: rc: 1
Oct 18 11:07:49.492: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  <nil>  Error from server (NotFound): pods "ss-2" not found
 [] <nil> 0xc0025c8de0 exit status 1 <nil> <nil> true [0xc002a74038 0xc002a74050 0xc002a74068] [0xc002a74038 0xc002a74050 0xc002a74068] [0xc002a74048 0xc002a74060] [0xba70e0 0xba70e0] 0xc0031c7620 <nil>}:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 18 11:07:59.493: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops847402944/kubeconfig exec --namespace=statefulset-3386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 18 11:08:00.564: INFO: rc: 1
Oct 18 11:08:00.564: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Oct 18 11:08:00.564: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 25 lines ...
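The long run of "Waiting 10s to retry failed RunHostCmd" lines above is the e2e framework re-running the same kubectl exec every ten seconds while ss-2 is deleted and recreated. A standalone Go sketch of that retry shape follows; it is an approximation of the pattern, not the framework's actual RunHostCmd code, and the command it runs is a placeholder.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithRetries runs a command and, on failure, waits `interval` and tries
// again until `timeout` elapses, mirroring the 10s retry cadence in the log.
func runWithRetries(interval, timeout time.Duration, name string, args ...string) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return out, nil
		}
		if time.Now().After(deadline) {
			return out, fmt.Errorf("giving up after %v: %v", timeout, err)
		}
		fmt.Printf("Waiting %v to retry failed command: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	// Placeholder command; the log above retries a kubectl exec for several
	// minutes while the stateful pod is being recreated.
	out, err := runWithRetries(10*time.Second, 5*time.Minute, "true")
	fmt.Println(string(out), err)
}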
Oct 18 11:03:45.161: INFO: Running AfterSuite actions on all nodes
Oct 18 11:08:12.473: INFO: Running AfterSuite actions on node 1
Oct 18 11:08:12.473: INFO: Skipping dumping logs from cluster


Ran 222 of 4413 Specs in 532.637 seconds
SUCCESS! -- 222 Passed | 0 Failed | 0 Pending | 4191 Skipped


Ginkgo ran 1 suite in 9m6.660305345s
Test Suite Passed
2020/10/18 11:08:12 process.go:155: Step 'platforms/linux/amd64/ginkgo --nodes=25 platforms/linux/amd64/e2e.test -- --kubeconfig=/tmp/kops847402944/kubeconfig --ginkgo.flakeAttempts=1 --provider=aws --gce-zone=eu-central-1b --gce-region=eu-central-1 --gce-multizone=false --host=https://api.e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --cluster-tag=e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --repo-root=. --num-nodes=0 --ginkgo.focus=\[Conformance\]|\[NodeConformance\] --ginkgo.skip=\[Slow\]|\[Serial\]|AdmissionWebhook|Aggregator|CustomResource --report-dir=/logs/artifacts --disable-log-dump=true' finished in 9m6.70601401s
2020/10/18 11:08:12 process.go:153: Running: kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller
... skipping 195 lines ...
	internet-gateway:igw-06fb095c45d7a16f2
	subnet:subnet-015fde5ce88968226
	security-group:sg-065903a9d362f0c20
	dhcp-options:dopt-07b1611103d0d0615
	volume:vol-02af8e6b7194952a7
volume:vol-02af8e6b7194952a7	still has dependencies, will retry
I1018 11:09:29.216342    3290 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-0a1ae96df6ffac56e	ok
volume:vol-0f35c4c9c2947e5ab	still has dependencies, will retry
volume:vol-0847a010bfeada4b6	ok
volume:vol-0afd6c6d0f5467a55	ok
subnet:subnet-015fde5ce88968226	still has dependencies, will retry
internet-gateway:igw-06fb095c45d7a16f2	still has dependencies, will retry
... skipping 19 lines ...
	volume:vol-02af8e6b7194952a7
	route-table:rtb-0b39836927485208b
	vpc:vpc-0b5c9d21ec4725a38
	volume:vol-0f35c4c9c2947e5ab
	subnet:subnet-015fde5ce88968226
	internet-gateway:igw-06fb095c45d7a16f2
I1018 11:09:50.908048    3290 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-0f35c4c9c2947e5ab	ok
I1018 11:09:50.912704    3290 errors.go:32] unexpected aws error code: "InvalidVolume.NotFound"
volume:vol-02af8e6b7194952a7	ok
subnet:subnet-015fde5ce88968226	ok
security-group:sg-065903a9d362f0c20	ok
internet-gateway:igw-06fb095c45d7a16f2	ok
route-table:rtb-0b39836927485208b	ok
vpc:vpc-0b5c9d21ec4725a38	ok
dhcp-options:dopt-07b1611103d0d0615	ok
Deleted kubectl config for e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io

Deleted cluster: "e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io"
2020/10/18 11:09:58 process.go:155: Step '/tmp/kops847402944/kops delete cluster e2e-kops-aws-k8s-1-15.test-cncf-aws.k8s.io --yes' finished in 1m10.393427631s
2020/10/18 11:09:58 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/10/18 11:09:58 main.go:312: Something went wrong: encountered 1 errors: [error starting ./cluster/kubectl.sh --match-server-version=false version: exec: already started]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 720, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 570, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...
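The single failure this job reports, "error starting ./cluster/kubectl.sh --match-server-version=false version: exec: already started", is the error Go's os/exec package returns when Start is called a second time on the same *exec.Cmd value, which suggests the harness reused a command object; the log does not show where. A minimal, unrelated reproduction of that error string:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("true")
	fmt.Println("first Start: ", cmd.Start()) // <nil>
	fmt.Println("second Start:", cmd.Start()) // exec: already started
	_ = cmd.Wait()
}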