PR | christianjoun: Implement API load balancer class with NLB and ELB support on AWS
Result | FAILURE
Tests | 1 failed / 5 succeeded
Started |
Elapsed | 5m45s
Revision | 0aa7a631405b340865262adf8315cc968efa1f93
Refs | 9011
job-version | v1.19.4-rc.0
revision | v1.19.4-rc.0
kops create cluster failed: error during /workspace/kops create cluster --name e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2b --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.4-rc.0 --admin-access 35.202.201.37/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes: exit status 2
from junit_runner.xml
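The "exit status 2" is the status of the kops child process itself: an unrecovered Go panic (visible in the build log below) makes the Go runtime exit with status 2, and the test wrapper surfaces that as the step failure. The sketch below only illustrates how a wrapper observes that status via os/exec; it is not the actual kubetest code, and only a few of the quoted flags are repeated.

// Illustrative only: how a wrapper process sees "exit status 2" when the
// kops child process panics. Not the real kubetest kops.go.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/workspace/kops", "create", "cluster",
		"--name", "e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io",
		"--cloud", "aws",
		"--zones", "ap-southeast-2b",
		"--yes",
		// remaining flags from the quoted command line omitted here
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// An unrecovered panic in the child exits with status 2, which
		// Run reports as an *exec.ExitError: "exit status 2".
		log.Fatalf("kops create cluster failed: %v", err)
	}
}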
Other steps recorded in junit_runner.xml:
Deferred TearDown
DumpClusterLogs (--up failed)
Extract
TearDown Previous
Timeout
... skipping 631 lines ...
2020/10/27 21:08:56 process.go:155: Step '/workspace/get-kube.sh' finished in 20.586274677s
2020/10/27 21:08:56 process.go:153: Running: /workspace/kops get clusters e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io
cluster not found "e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io"
2020/10/27 21:08:57 process.go:155: Step '/workspace/kops get clusters e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io' finished in 762.117823ms
2020/10/27 21:08:57 util.go:42: curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2020/10/27 21:08:57 kops.go:514: failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
2020/10/27 21:08:57 util.go:68: curl https://ip.jsb.workers.dev
2020/10/27 21:08:57 kops.go:439: Using external IP for admin access: 35.202.201.37/32
2020/10/27 21:08:57 process.go:153: Running: /workspace/kops create cluster --name e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2b --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.4-rc.0 --admin-access 35.202.201.37/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes
I1027 21:08:57.496564 9229 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I1027 21:08:57.601216 9229 create_cluster.go:726] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W1027 21:08:58.157319 9229 channel.go:299] unable to parse kops version "pull-56d6244a6e"
I1027 21:08:58.957385 9229 subnets.go:180] Assigned CIDR 172.20.32.0/19 to subnet ap-southeast-2b
W1027 21:09:01.577037 9229 apply_cluster.go:863] unable to parse kops version "pull-56d6244a6e"
W1027 21:09:02.177343 9229 urls.go:75] Using base url from KOPS_BASE_URL env var: "https://storage.googleapis.com/kops-ci/pulls/pull-kops-e2e-kubernetes-aws/pull-56d6244a6e"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x30107d8]

goroutine 1 [running]:
k8s.io/kops/pkg/model/awsmodel.(*AutoscalingGroupModelBuilder).buildLaunchConfigurationTask(0xc0006bdbc0, 0xc000961b70, 0xc000190be0, 0x48, 0xc000bc3180, 0xc000744d58, 0x3, 0x3)
	pkg/model/awsmodel/autoscalinggroup.go:207 +0x558
k8s.io/kops/pkg/model/awsmodel.(*AutoscalingGroupModelBuilder).buildLaunchTemplateTask(0xc0006bdbc0, 0xc000961b70, 0xc000190be0, 0x48, 0xc000bc3180, 0x48, 0x0, 0x0)
... skipping 25 lines ...
2020/10/27 21:09:03 process.go:155: Step '/workspace/kops create cluster --name e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2b --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.4-rc.0 --admin-access 35.202.201.37/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes' finished in 6.254028184s
2020/10/27 21:09:03 process.go:153: Running: kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller
2020/10/27 21:09:03 process.go:153: Running: /workspace/kops export kubecfg e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io
I1027 21:09:03.757291 9241 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
The connection to the server localhost:8080 was refused - did you specify the right host or port?
2020/10/27 21:09:03 process.go:155: Step 'kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller' finished in 183.105041ms
2020/10/27 21:09:03 kubernetes.go:117: kubectl get pods failed: error during kubectl -n kube-system get pods -ojson -l k8s-app=kops-controller: exit status 1
W1027 21:09:04.526998 9241 vfs_castore.go:604] CA private key was not found
cannot find CA certificate
2020/10/27 21:09:04 process.go:155: Step '/workspace/kops export kubecfg e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io' finished in 811.782919ms
2020/10/27 21:09:04 process.go:153: Running: /workspace/kops get clusters e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io
I1027 21:09:04.561458 9262 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
... skipping 4 lines ...
I1027 21:09:05.267136 9272 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete
Deleted cluster: "e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io"
2020/10/27 21:09:15 process.go:155: Step '/workspace/kops delete cluster e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io --yes' finished in 10.323241841s
2020/10/27 21:09:15 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/10/27 21:09:15 main.go:316: Something went wrong: starting e2e cluster: kops create cluster failed: error during /workspace/kops create cluster --name e2e-a8c77c5d83-ff1eb.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones ap-southeast-2b --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.4-rc.0 --admin-access 35.202.201.37/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes: exit status 2
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 720, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 570, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
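The step fails because of the panic shown above: buildLaunchConfigurationTask dereferences a nil pointer at pkg/model/awsmodel/autoscalinggroup.go:207, most plausibly an optional field that is nil for this cluster spec and was introduced or touched by the load balancer change under test. The sketch below is a minimal illustration of that failure mode and the usual nil-check guard; the type and field names (InstanceGroup, LoadBalancerSpec, Class) are hypothetical stand-ins, not the actual kops structs.

// Minimal sketch of the reported failure mode: dereferencing an optional
// pointer field that was never set. Names are hypothetical, not kops types.
package main

import "fmt"

type LoadBalancerSpec struct {
	Class string
}

type InstanceGroup struct {
	LoadBalancer *LoadBalancerSpec // nil when no load balancer is configured
}

func launchConfigClass(ig *InstanceGroup) string {
	// Without this guard, ig.LoadBalancer.Class panics with
	// "invalid memory address or nil pointer dereference" when the
	// pointer is nil, matching the SIGSEGV in the stack trace above.
	if ig.LoadBalancer == nil {
		return "classic" // hypothetical default for the unset case
	}
	return ig.LoadBalancer.Class
}

func main() {
	// Prints "classic" instead of panicking for an unconfigured group.
	fmt.Println(launchConfigClass(&InstanceGroup{}))
}

The general pattern is the same regardless of the exact field: any pointer coming from the cluster or instance group API spec has to be nil-checked (or defaulted during validation) before the AWS model builders dereference it.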