PR: kishorj: Update NLB docs
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-06-30 16:40
Elapsed: 41m5s
Revision: e024af28559e52e23085673ba5dde8f8503ae1ac
Refs: 2111

No Test Failures!


Error lines from build-log.txt

... skipping 221 lines ...
#14 227.8 	github.com/rubenv/sql-migrate@v0.0.0-20200616145509-8d140a17f351 requires
#14 227.8 	github.com/godror/godror@v0.13.3 requires
#14 227.8 	github.com/go-kit/kit@v0.10.0 requires
#14 227.8 	github.com/hashicorp/consul/api@v1.3.0 requires
#14 227.8 	github.com/hashicorp/serf@v0.8.2 requires
#14 227.8 	github.com/hashicorp/logutils@v1.0.0: reading github.com/hashicorp/logutils/go.mod at revision v1.0.0: unknown revision v1.0.0
#14 ERROR: executor failed running [/bin/sh -c GOPROXY=direct go mod download]: exit code: 1
------
 > [base 5/5] RUN GOPROXY=direct go mod download:
------
Dockerfile:10
--------------------
   8 |     # cache deps before building and copying source so that we don't need to re-download as much
   9 |     # and so that source changes don't invalidate our downloaded layer
  10 | >>> RUN GOPROXY=direct go mod download
  11 |     
  12 |     FROM base AS build
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c GOPROXY=direct go mod download]: exit code: 1
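The failing layer is the RUN GOPROXY=direct go mod download step shown above: with GOPROXY=direct the go command bypasses the module proxy and resolves every dependency from its upstream VCS, so an "unknown revision" error means a tag such as github.com/hashicorp/logutils v1.0.0 could not be fetched from origin at build time. A minimal way to reproduce that step outside the image build, as a hedged sketch (the checkout path below is an assumption, not taken from this log):

    # hedged reproduction of the failing Dockerfile step; run from a checkout of the
    # aws-load-balancer-controller repo (the path below is assumed, not from this log)
    cd aws-load-balancer-controller
    # bypass the module proxy exactly as the Dockerfile layer does
    GOPROXY=direct go mod download
    # or probe the single module whose tag failed to resolve above
    GOPROXY=direct go mod download github.com/hashicorp/logutils@v1.0.0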
+ n=1
+ sleep 2
+ '[' 1 -ge 2 ']'
+ DOCKER_CLI_EXPERIMENTAL=enabled
+ docker buildx build . --target bin --tag 607362164682.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.2.1-6-gae58fc6a --push --progress plain --platform linux/amd64
#1 [internal] load build definition from Dockerfile
... skipping 59 lines ...
#13 CACHED

#14 [base 5/5] RUN GOPROXY=direct go mod download
#14 sha256:57272bcd4784a867fb8e60ffe5e78bf547d0078123adea002af64b07ccbbf23b
#14 253.5 go: k8s.io/apimachinery@v0.21.2 requires
#14 253.5 	github.com/moby/spdystream@v0.2.0: reading github.com/moby/spdystream/go.mod at revision v0.2.0: unknown revision v0.2.0
#14 ERROR: executor failed running [/bin/sh -c GOPROXY=direct go mod download]: exit code: 1
------
 > [base 5/5] RUN GOPROXY=direct go mod download:
------
Dockerfile:10
--------------------
   8 |     # cache deps before building and copying source so that we don't need to re-download as much
   9 |     # and so that source changes don't invalidate our downloaded layer
  10 | >>> RUN GOPROXY=direct go mod download
  11 |     
  12 |     FROM base AS build
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c GOPROXY=direct go mod download]: exit code: 1
+ n=2
+ sleep 2
+ '[' 2 -ge 2 ']'
+ [[ 0 -ne 0 ]]
+ return 0
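The xtrace lines surrounding the two attempts come from a retry wrapper: after each failed docker buildx build the attempt counter is bumped, the script sleeps for 2 seconds, and the counter is compared against a limit of 2 before either retrying or moving on. A rough sketch of that kind of wrapper, reconstructed from the trace (the loop shape, variable names, and the image tag variable are assumptions; the repo's actual script may differ):

    # hypothetical retry wrapper inferred from the xtrace above; not the repo's real script
    n=0
    max=2
    until docker buildx build . --target bin --tag "$IMAGE_TAG" --push --progress plain --platform linux/amd64; do
      n=$((n + 1))
      sleep 2
      if [ "$n" -ge "$max" ]; then
        break    # stop retrying after the attempt limit; the trace shows the script then continues
      fi
    done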
+ go install github.com/mikefarah/yq/v4@v4.6.1
... skipping 606 lines ...
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:16
  with podReadinessGate enabled
  /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:37
    standalone Ingress should behaves correctly [It]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:38

    Unexpected error:
        <*utils.MultiError | 0xc002a07590>: {
            errs: [
                <*errors.StatusError | 0xc000625180>{
                    ErrStatus: {
                        TypeMeta: {Kind: "", APIVersion: ""},
                        ListMeta: {
                            SelfLink: "",
                            ResourceVersion: "",
                            Continue: "",
                            RemainingItemCount: nil,
                        },
                        Status: "Failure",
                        Message: "Internal error occurred: failed calling webhook \"vingress.elbv2.k8s.aws\": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service \"aws-load-balancer-webhook-service\"",
                        Reason: "InternalError",
                        Details: {
                            Name: "",
                            Group: "",
                            Kind: "",
                            UID: "",
                            Causes: [
                                {
                                    Type: "",
                                    Message: "failed calling webhook \"vingress.elbv2.k8s.aws\": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service \"aws-load-balancer-webhook-service\"",
                                    Field: "",
                                },
                            ],
                            RetryAfterSeconds: 0,
                        },
                        Code: 500,
                    },
                },
            ],
        }
        multiple error: [Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service "aws-load-balancer-webhook-service"]
    occurred

    /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:72
------------------------------
test ingresses with multiple path and backends with podReadinessGate enabled 
  IngressGroup across namespaces should behaves correctly
... skipping 68 lines ...
{"level":"info","ts":1625073230.8714955,"msg":"Deployment is not ready: kube-system/aws-load-balancer-controller. 0 out of 2 expected pods are ready"}
{"level":"info","ts":1625073232.8680446,"msg":"Deployment is not ready: kube-system/aws-load-balancer-controller. 0 out of 2 expected pods are ready"}
{"level":"info","ts":1625073234.8678508,"msg":"Deployment is not ready: kube-system/aws-load-balancer-controller. 0 out of 2 expected pods are ready"}
{"level":"info","ts":1625073236.8683763,"msg":"Deployment is not ready: kube-system/aws-load-balancer-controller. 0 out of 2 expected pods are ready"}
{"level":"info","ts":1625073238.8701444,"msg":"Deployment is not ready: kube-system/aws-load-balancer-controller. 0 out of 2 expected pods are ready"}
{"level":"info","ts":1625073240.8697755,"msg":"Deployment is not ready: kube-system/aws-load-balancer-controller. 0 out of 2 expected pods are ready"}
{"level":"info","ts":1625073242.6674962,"msg":"warning: Upgrade \"aws-load-balancer-controller\" failed: timed out waiting for the condition"}
STEP: deploy stack
{"level":"info","ts":1625073302.7564116,"msg":"allocate all namespaces"}
{"level":"info","ts":1625073302.7564576,"msg":"allocating namespace","nsID":"ns-1"}
{"level":"info","ts":1625073302.9946554,"msg":"allocated namespace","nsID":"ns-1","nsName":"aws-lb-e2e-ac34f8"}
{"level":"info","ts":1625073302.9947143,"msg":"allocating namespace","nsID":"ns-2"}
{"level":"info","ts":1625073303.067277,"msg":"allocated namespace","nsID":"ns-2","nsName":"aws-lb-e2e-ac505d"}
... skipping 26 lines ...
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:16
  with podReadinessGate enabled
  /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:37
    IngressGroup across namespaces should behaves correctly [It]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:99

    Unexpected error:
        <*utils.MultiError | 0xc00000f908>: {
            errs: [
                <*errors.StatusError | 0xc00047e5a0>{
                    ErrStatus: {
                        TypeMeta: {Kind: "", APIVersion: ""},
                        ListMeta: {
                            SelfLink: "",
                            ResourceVersion: "",
                            Continue: "",
                            RemainingItemCount: nil,
                        },
                        Status: "Failure",
                        Message: "Internal error occurred: failed calling webhook \"vingress.elbv2.k8s.aws\": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service \"aws-load-balancer-webhook-service\"",
                        Reason: "InternalError",
                        Details: {
                            Name: "",
                            Group: "",
                            Kind: "",
                            UID: "",
                            Causes: [
                                {
                                    Type: "",
                                    Message: "failed calling webhook \"vingress.elbv2.k8s.aws\": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service \"aws-load-balancer-webhook-service\"",
                                    Field: "",
                                },
                            ],
                            RetryAfterSeconds: 0,
                        },
                        Code: 500,
... skipping 6 lines ...
                            SelfLink: "",
                            ResourceVersion: "",
                            Continue: "",
                            RemainingItemCount: nil,
                        },
                        Status: "Failure",
                        Message: "Internal error occurred: failed calling webhook \"vingress.elbv2.k8s.aws\": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service \"aws-load-balancer-webhook-service\"",
                        Reason: "InternalError",
                        Details: {
                            Name: "",
                            Group: "",
                            Kind: "",
                            UID: "",
                            Causes: [
                                {
                                    Type: "",
                                    Message: "failed calling webhook \"vingress.elbv2.k8s.aws\": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service \"aws-load-balancer-webhook-service\"",
                                    Field: "",
                                },
                            ],
                            RetryAfterSeconds: 0,
                        },
                        Code: 500,
                    },
                },
            ],
        }
        multiple error: [Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service "aws-load-balancer-webhook-service" Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": Post https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1beta1-ingress?timeout=10s: no endpoints available for service "aws-load-balancer-webhook-service"]
    occurred

    /home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:178
------------------------------


Summarizing 2 Failures:

[Fail] test ingresses with multiple path and backends with podReadinessGate enabled [It] standalone Ingress should behaves correctly 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:72

[Fail] test ingresses with multiple path and backends with podReadinessGate enabled [It] IngressGroup across namespaces should behaves correctly 
/home/prow/go/src/github.com/kubernetes-sigs/aws-load-balancer-controller/test/e2e/ingress/multi_path_backend_test.go:178

Ran 2 of 2 Specs in 379.111 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 0 Skipped
--- FAIL: TestIngress (379.11s)
FAIL

Ginkgo ran 1 suite in 8m25.454911953s
Test Suite Failed
+ cleanup
+ sleep 60
+ cleanup_cluster
+ eksctl::delete_cluster lb-controller-e2e-2111-1410277122173308928 us-west-2
+ declare -r cluster_name=lb-controller-e2e-2111-1410277122173308928 region=us-west-2
+ local cluster_config=/tmp/lb-controller-e2e/clusters/lb-controller-e2e-2111-1410277122173308928.yaml
... skipping 26 lines ...
deleting IAM policy for controller
+ iam::delete_policy arn:aws:iam::607362164682:policy/lb-controller-e2e-2111-1410277122173308928 us-west-2
+ declare -r policy_arn=arn:aws:iam::607362164682:policy/lb-controller-e2e-2111-1410277122173308928 region=us-west-2
+ aws iam delete-policy --region us-west-2 --policy-arn arn:aws:iam::607362164682:policy/lb-controller-e2e-2111-1410277122173308928
+ echo 'deleted IAM policy for controller: arn:aws:iam::607362164682:policy/lb-controller-e2e-2111-1410277122173308928'
deleted IAM policy for controller: arn:aws:iam::607362164682:policy/lb-controller-e2e-2111-1410277122173308928
make: *** [Makefile:106: e2e-test] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
a9af44a4ab52
... skipping 4 lines ...