PR | hakman: Use -ginkgo.junit-report instead of -ginkgo.reportFile
Result | ABORTED
Tests | 0 failed / 0 succeeded
Started |
Elapsed | 21m46s
Revision | 4dd2dc3821b23ee623f87f24ab230820781856f8
Refs | 13650
... skipping 415 lines ...
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --up --kops-binary-path=/home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops --kubernetes-version=v1.24.0 '--create-args=--networking amazonvpc --set=cluster.spec.awsLoadBalancerController.enabled=true --set=cluster.spec.certManager.enabled=true --zones=eu-west-1a,eu-west-1b,eu-west-1c --discovery-store=s3://k8s-kops-prow/e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io/discovery' --template-path=
I0514 12:55:20.862966 38090 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0514 12:55:20.864012 38090 app.go:61] RunDir for this run: "/logs/artifacts/e475efca-d383-11ec-90e2-ce841d29e8b8"
I0514 12:55:20.867627 38090 app.go:120] ID for this run: "e475efca-d383-11ec-90e2-ce841d29e8b8"
I0514 12:55:20.867702 38090 up.go:44] Cleaning up any leaked resources from previous cluster
I0514 12:55:20.867778 38090 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0514 12:55:21.366067 38090 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0514 12:55:21.366131 38090 down.go:48] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops delete cluster --name e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io --yes
I0514 12:55:21.386884 38111 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0514 12:55:21.387072 38111 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0514 12:55:21.387098 38111 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io" not found
I0514 12:55:21.855816 38090 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/05/14 12:55:21 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0514 12:55:21.865091 38090 http.go:37] curl https://ip.jsb.workers.dev
I0514 12:55:21.991996 38090 up.go:156] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops create cluster --name e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io --cloud aws --kubernetes-version v1.24.0 --ssh-public-key /tmp/kops/e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --networking amazonvpc --set=cluster.spec.awsLoadBalancerController.enabled=true --set=cluster.spec.certManager.enabled=true --zones=eu-west-1a,eu-west-1b,eu-west-1c --discovery-store=s3://k8s-kops-prow/e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io/discovery --admin-access 34.68.225.152/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --master-size c5.large
I0514 12:55:22.013635 38121 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0514 12:55:22.013708 38121 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0514 12:55:22.013713 38121 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0514 12:55:22.042370 38121 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io/id_ed25519.pub
... skipping 648 lines ...
I0514 12:55:58.978609 38090 up.go:240] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops validate cluster --name e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io --count 10 --wait 15m0s I0514 12:55:58.999182 38163 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true I0514 12:55:58.999273 38163 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true I0514 12:55:58.999279 38163 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true Validating cluster e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io W0514 12:56:00.449449 38163 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:56:10.479034 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:56:20.515952 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0514 12:56:30.561387 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:56:40.590155 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:56:50.622053 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:57:00.655295 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. 
Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:57:10.686872 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:57:20.719411 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:57:30.767137 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0514 12:57:40.801865 38163 validate_cluster.go:232] (will retry): cluster not yet healthy W0514 12:57:50.824401 38163 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c4347fd6f3-7640b.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:58:00.863971 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:58:10.912812 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0514 12:58:20.950905 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:58:30.984574 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:58:41.033030 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:58:51.070313 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. 
Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:59:01.107512 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:59:11.153808 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b nodes-eu-west-1c Node t3.medium 1 1 eu-west-1c NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0514 12:59:21.189515 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 24 lines ... Pod kube-system/ebs-csi-controller-77f6c5996b-h47hr system-cluster-critical pod "ebs-csi-controller-77f6c5996b-h47hr" is pending Pod kube-system/ebs-csi-node-c2bbg system-node-critical pod "ebs-csi-node-c2bbg" is pending Pod kube-system/ebs-csi-node-p6gzg system-node-critical pod "ebs-csi-node-p6gzg" is pending Pod kube-system/ebs-csi-node-z2jxq system-node-critical pod "ebs-csi-node-z2jxq" is pending Pod kube-system/ebs-csi-node-zbntk system-node-critical pod "ebs-csi-node-zbntk" is pending Validation Failed W0514 12:59:35.190116 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 24 lines ... 
Pod kube-system/ebs-csi-controller-77f6c5996b-h47hr system-cluster-critical pod "ebs-csi-controller-77f6c5996b-h47hr" is pending Pod kube-system/ebs-csi-node-c2bbg system-node-critical pod "ebs-csi-node-c2bbg" is pending Pod kube-system/ebs-csi-node-p6gzg system-node-critical pod "ebs-csi-node-p6gzg" is pending Pod kube-system/ebs-csi-node-z2jxq system-node-critical pod "ebs-csi-node-z2jxq" is pending Pod kube-system/ebs-csi-node-zbntk system-node-critical pod "ebs-csi-node-zbntk" is pending Validation Failed W0514 12:59:48.223278 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 19 lines ... Pod kube-system/ebs-csi-controller-77f6c5996b-h47hr system-cluster-critical pod "ebs-csi-controller-77f6c5996b-h47hr" is pending Pod kube-system/ebs-csi-node-c2bbg system-node-critical pod "ebs-csi-node-c2bbg" is pending Pod kube-system/ebs-csi-node-p6gzg system-node-critical pod "ebs-csi-node-p6gzg" is pending Pod kube-system/ebs-csi-node-z2jxq system-node-critical pod "ebs-csi-node-z2jxq" is pending Pod kube-system/ebs-csi-node-zbntk system-node-critical pod "ebs-csi-node-zbntk" is pending Validation Failed W0514 13:00:01.347588 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 11 lines ... KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-55fb784444-vhq24 system-cluster-critical pod "aws-load-balancer-controller-55fb784444-vhq24" is pending Pod kube-system/ebs-csi-controller-77f6c5996b-4h9gl system-cluster-critical pod "ebs-csi-controller-77f6c5996b-4h9gl" is pending Pod kube-system/ebs-csi-controller-77f6c5996b-h47hr system-cluster-critical pod "ebs-csi-controller-77f6c5996b-h47hr" is pending Pod kube-system/ebs-csi-node-z2jxq system-node-critical pod "ebs-csi-node-z2jxq" is pending Validation Failed W0514 13:00:14.369955 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 8 lines ... i-0d6e71a6e058132bd node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-55fb784444-vhq24 system-cluster-critical pod "aws-load-balancer-controller-55fb784444-vhq24" is pending Validation Failed W0514 13:00:27.394562 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 8 lines ... 
i-0d6e71a6e058132bd node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-55fb784444-vhq24 system-cluster-critical pod "aws-load-balancer-controller-55fb784444-vhq24" is pending Validation Failed W0514 13:00:40.413241 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 8 lines ... i-0d6e71a6e058132bd node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-55fb784444-vhq24 system-cluster-critical pod "aws-load-balancer-controller-55fb784444-vhq24" is pending Validation Failed W0514 13:00:53.354281 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 8 lines ... i-0d6e71a6e058132bd node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-55fb784444-vhq24 system-cluster-critical pod "aws-load-balancer-controller-55fb784444-vhq24" is pending Validation Failed W0514 13:01:06.267509 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 8 lines ... i-0d6e71a6e058132bd node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-55fb784444-vhq24 system-cluster-critical pod "aws-load-balancer-controller-55fb784444-vhq24" is pending Validation Failed W0514 13:01:19.287229 38163 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-1a Master c5.large 1 1 eu-west-1a nodes-eu-west-1a Node t3.medium 2 2 eu-west-1a nodes-eu-west-1b Node t3.medium 1 1 eu-west-1b ... skipping 353 lines ... multiple times, values are ORed.[0m [38;5;14m--ginkgo.skip-file[0m [file (regexp) | file:line | file:lineA-lineB | file:line,line,line] [38;5;243m[0m [38;5;246mIf set, ginkgo will skip specs in matching files. Can be specified multiple times, values are ORed.[0m [38;5;9m[1m[4mFailure Handling[0m [38;5;9m--ginkgo.fail-on-pending[0m [38;5;243m[0m [38;5;246mIf set, ginkgo will mark the test suite as failed if any specs are pending.[0m [38;5;9m--ginkgo.fail-fast[0m [38;5;243m[0m [38;5;246mIf set, ginkgo will stop running a test suite after a failure occurs.[0m [38;5;9m--ginkgo.flake-attempts[0m [int] [38;5;243m(default: 0 - failed tests are not retried)[0m [38;5;246mMake up to this many attempts to run each spec. If any of the attempts succeed, the suite will not be failed.[0m [38;5;13m[1m[4mControlling Output Formatting[0m [38;5;13m--ginkgo.no-color[0m [38;5;243m[0m [38;5;246mIf set, suppress color output in default reporter.[0m [38;5;13m--ginkgo.slow-spec-threshold[0m [duration] [38;5;243m(default: 5s)[0m [38;5;246mSpecs that take longer to run than this threshold are flagged as slow by the ... skipping 114 lines ... 
        write an execution trace to file
  -test.v
        verbose: print additional output

Ginkgo ran 1 suite in 1m13.009729623s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md

... skipping 375 lines ...
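For context on the flag rename named in the PR row above: a minimal invocation sketch, assuming a Ginkgo-built test binary (the ./e2e.test path and output file below are illustrative only), where -ginkgo.reportFile is the pre-2.0 spelling that -ginkgo.junit-report replaces in Ginkgo 2.x, both taking the JUnit XML output file:

  ./e2e.test -ginkgo.reportFile=/logs/artifacts/junit_runner.xml    # older Ginkgo flag
  ./e2e.test -ginkgo.junit-report=/logs/artifacts/junit_runner.xml  # Ginkgo 2.x replacement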