PR (oliviassss): Restrict subnet auto-discovery to new LB creation on service side
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-07-21 20:11
Elapsed: 1h7m
Revision: c2ca4c22123132d089619c64fae5ea9103cca10a
Refs: 2129



Error lines from build-log.txt

... skipping 1047 lines ...
  /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:37
    IngressGroup across namespaces should behaves correctly
    /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:99
------------------------------

Ran 2 of 2 Specs in 598.302 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS

Running Suite: Service Suite
============================
Random Seed: 1626899746
Will run 8 of 8 specs
... skipping 320 lines ...
    /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_ip_target_test.go:335
------------------------------


Summarizing 7 Failures:

[Fail] test k8s service reconciled by the aws load balancer controller with NLB instance target configuration [It] should provision internet-facing load balancer resources 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_instance_target_test.go:96

[Fail] test k8s service reconciled by the aws load balancer controller with NLB instance target configuration [It] should provision internal load-balancer resources 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_instance_target_test.go:207

[Fail] test k8s service reconciled by the aws load balancer controller with NLB instance target configuration [It] should create TLS listeners 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_instance_target_test.go:269

[Fail] test k8s service reconciled by the aws load balancer controller with NLB instance target configuration [It] should enable proxy protocol v2 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_instance_target_test.go:300

[Fail] test k8s service reconciled by the aws load balancer controller with NLB instance target configuration with target node labels [It] should add only the labelled nodes to the target group 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/aws_resource_verifier.go:211

[Fail] k8s service reconciled by the aws load balancer NLB with IP target configuration [It] Should create and verify internet-facing NLB with IP targets 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_ip_target_test.go:159

[Fail] k8s service reconciled by the aws load balancer NLB IP with TLS configuration [It] Should create TLS listeners 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/service/nlb_ip_target_test.go:259

Ran 8 of 8 Specs in 1430.790 seconds
FAIL! -- 1 Passed | 7 Failed | 0 Pending | 0 Skipped
--- FAIL: TestService (1430.79s)
FAIL

Ginkgo ran 2 suites in 35m32.801340647s
Test Suite Failed
+ cleanup
+ sleep 60
+ cleanup_cluster
+ eksctl::delete_cluster lb-controller-e2e-2129-1417940131221868544 us-west-2
+ declare -r cluster_name=lb-controller-e2e-2129-1417940131221868544 region=us-west-2
+ local cluster_config=/tmp/lb-controller-e2e/clusters/lb-controller-e2e-2129-1417940131221868544.yaml
... skipping 3 lines ...
[ℹ]  eksctl version 0.34.0
[ℹ]  using region us-west-2
[ℹ]  deleting EKS cluster "lb-controller-e2e-2129-1417940131221868544"
[ℹ]  deleted 0 Fargate profile(s)
[✔]  kubeconfig has been updated
[ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
[!]  retryable error (Throttling: Rate exceeded
	status code: 400, request id: 5dee63fe-865b-405a-8e30-9730f0bb52fe) from cloudformation/DescribeStacks - will retry after delay of 751.766159ms
[ℹ]  4 sequential tasks: { delete nodegroup "ng-1", 2 sequential sub-tasks: { 2 sequential sub-tasks: { delete IAM role for serviceaccount "kube-system/aws-load-balancer-controller", delete serviceaccount "kube-system/aws-load-balancer-controller" }, delete IAM OIDC provider }, delete addon IAM "eksctl-lb-controller-e2e-2129-1417940131221868544-addon-vpc-cni", delete cluster control plane "lb-controller-e2e-2129-1417940131221868544" }
[ℹ]  will delete stack "eksctl-lb-controller-e2e-2129-1417940131221868544-nodegroup-ng-1"
[ℹ]  waiting for stack "eksctl-lb-controller-e2e-2129-1417940131221868544-nodegroup-ng-1" to get deleted
[ℹ]  will delete stack "eksctl-lb-controller-e2e-2129-1417940131221868544-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
[ℹ]  waiting for stack "eksctl-lb-controller-e2e-2129-1417940131221868544-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" to get deleted
... skipping 12 lines ...
deleting IAM policy for controller
+ iam::delete_policy arn:aws:iam::607362164682:policy/lb-controller-e2e-2129-1417940131221868544 us-west-2
+ declare -r policy_arn=arn:aws:iam::607362164682:policy/lb-controller-e2e-2129-1417940131221868544 region=us-west-2
+ aws iam delete-policy --region us-west-2 --policy-arn arn:aws:iam::607362164682:policy/lb-controller-e2e-2129-1417940131221868544
+ echo 'deleted IAM policy for controller: arn:aws:iam::607362164682:policy/lb-controller-e2e-2129-1417940131221868544'
deleted IAM policy for controller: arn:aws:iam::607362164682:policy/lb-controller-e2e-2129-1417940131221868544
make: *** [Makefile:117: e2e-test] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
b38e8a511809
... skipping 4 lines ...