Result: FAILURE
Tests: 23 failed / 714 succeeded
Started: 2019-07-11 22:04
Elapsed: 52m43s
Revision:
Builder: gke-prow-ssd-pool-1a225945-z5n1
pod: aeeacdae-a427-11e9-8217-96c43017ab5b
resultstore: https://source.cloud.google.com/results/invocations/97b4a725-00e2-4688-a3a3-da33d1c20671/targets/test
infra-commit: 4de0259d8
job-version: v1.16.0-alpha.0.2128+2659b3755aa16e
master_os_image:
node_os_image: cos-u-73-11647-217-0
revision: v1.16.0-alpha.0.2128+2659b3755aa16e

Test Failures


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny attaching pod 34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\sattaching\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
creating service e2e-test-webhook in namespace webhook-1490
Unexpected error:
    <*errors.StatusError | 0xc001c490e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-1490/services\\\": the server could not find the requested resource\") has prevented the request from succeeding (post services)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-1490/services\": the server could not find the requested resource",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-1490/services\": the server could not find the requested resource") has prevented the request from succeeding (post services)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:407
				
stdout/stderr from junit_06.xml
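
The dump above is a *errors.StatusError from k8s.io/apimachinery, the same shape as the other 500-level failures in this run. As a minimal sketch (not code from the test), such an error can be classified by its Reason and Code fields rather than by matching the message text, using only values shown in the trace:

package main

import (
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Rebuild the error above from the fields shown in the dump.
    err := &apierrors.StatusError{ErrStatus: metav1.Status{
        Status: metav1.StatusFailure,
        Reason: metav1.StatusReasonInternalError,
        Code:   500,
    }}
    // The apierrors helpers inspect Reason/Code, not the message string.
    fmt.Println(apierrors.IsInternalError(err)) // true
    fmt.Println(apierrors.ReasonForError(err))  // InternalError
}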



Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 2m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0002ad850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:392
				
stdout/stderr from junit_03.xml
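
The "timed out waiting for the condition" text here (and in the HPA and DNS failures below) is the generic wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned when a polled condition never reports true before the deadline. A minimal sketch of how that error arises (intervals are arbitrary, not the test's):

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    // Poll a condition that never succeeds; once the timeout elapses,
    // Poll returns the sentinel wait.ErrWaitTimeout.
    err := wait.Poll(50*time.Millisecond, 200*time.Millisecond, func() (bool, error) {
        return false, nil // e.g. the wardle APIService never becomes Available
    })
    fmt.Println(err == wait.ErrWaitTimeout, err) // true timed out waiting for the condition
}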



Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods 1m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sadopt\smatching\sorphans\sand\srelease\snon\-matching\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 11 22:42:17.237: Couldn't delete ns: "statefulset-6695": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1/namespaces/statefulset-6695/rolebindings\": the server could not find the requested resource") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1/namespaces/statefulset-6695/rolebindings\\\": the server could not find the requested resource\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001a0bc20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
				
stdout/stderr from junit_14.xml



Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sDNS\shorizontal\sautoscaling\s\[DisabledForLargeClusters\]\skube\-dns\-autoscaler\sshould\sscale\skube\-dns\spods\sin\sboth\snonfaulty\sand\sfaulty\sscenarios$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:57
Unexpected error:
    <*errors.errorString | 0xc002ec3870>: {
        s: "expected 1 DNS deployment, got 0",
    }
    expected 1 DNS deployment, got 0
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:67
				
stdout/stderr from junit_24.xml
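
The failed precondition counts the Deployment that kube-dns-autoscaler scales. A hypothetical client-go sketch of the same lookup, written against the pre-context List signature of this era; the kube-system namespace and k8s-app=kube-dns label are assumptions, not taken from the test source:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the default kubeconfig location (assumed).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    // List candidate DNS deployments; the test requires exactly one.
    list, err := cs.AppsV1().Deployments("kube-system").List(metav1.ListOptions{
        LabelSelector: "k8s-app=kube-dns", // assumed label
    })
    if err != nil {
        panic(err)
    }
    if n := len(list.Items); n != 1 {
        fmt.Printf("expected 1 DNS deployment, got %d\n", n)
    }
}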



Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods 16m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s1\spod\sto\s2\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
timeout waiting 15m0s for 2 replicas
Unexpected error:
    <*errors.errorString | 0xc0002ea870>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
stdout/stderr from junit_21.xml



Kubernetes e2e suite [sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects 1m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\s\[k8s\.io\]\sWith\sa\sserver\slistening\son\slocalhost\s\[k8s\.io\]\sthat\sexpects\sa\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sDATA\,\sand\sdisconnects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 11 22:42:16.414: Couldn't delete ns: "port-forwarding-2001": an error on the server ("Internal Server Error: \"/apis/policy/v1beta1/namespaces/port-forwarding-2001/poddisruptionbudgets\": the server could not find the requested resource") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/policy/v1beta1/namespaces/port-forwarding-2001/poddisruptionbudgets\\\": the server could not find the requested resource\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002bcecc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
				
stdout/stderr from junit_11.xml



Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] 11m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sGuestbook\sapplication\sshould\screate\sand\sstop\sa\sworking\sapplication\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jul 11 22:34:17.824: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2156
				
stdout/stderr from junit_17.xml



Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema 51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\sclient\-side\svalidation\sshould\screate\/apply\sa\svalid\sCR\swith\sarbitrary\-extra\sproperties\sfor\sCRD\swith\spartially\-specified\svalidation\sschema$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:950
Jul 11 22:32:08.444: failed to create CR {"kind":"E2e-test-kubectl-4092-crd","apiVersion":"kubectl-crd-test.k8s.io/v1","metadata":{"name":"test-cr"},"spec":{"bars":[{"name":"test-bar"}],"extraProperty":"arbitrary-value"}} in namespace --namespace=kubectl-7012: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.233.164.69 --kubeconfig=/tmp/gke-kubecfg933856301 --namespace=kubectl-7012 create --validate=true -f -] []  0xc0014a8720  error: error validating "STDIN": error validating data: ValidationError(E2e-test-kubectl-4092-crd.spec): unknown field "extraProperty" in io.k8s.kubectl-crd-test.v1.E2e-test-kubectl-4092-crd.spec; if you choose to ignore these errors, turn validation off with --validate=false
 [] <nil> 0xc00226e8d0 exit status 1 <nil> <nil> true [0xc0021f68f0 0xc0021f6958 0xc0021f6978] [0xc0021f68f0 0xc0021f6958 0xc0021f6978] [0xc0021f6910 0xc0021f6940 0xc0021f6968] [0x9d0fb0 0x9d10e0 0x9d10e0] 0xc002c59b00 <nil>}:
Command stdout:

stderr:
error: error validating "STDIN": error validating data: ValidationError(E2e-test-kubectl-4092-crd.spec): unknown field "extraProperty" in io.k8s.kubectl-crd-test.v1.E2e-test-kubectl-4092-crd.spec; if you choose to ignore these errors, turn validation off with --validate=false

error:
exit status 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
				
stdout/stderr from junit_24.xml
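
The test asserts that a CR with an arbitrary extra property is accepted while client-side validation stays on (the CRD schema is only partially specified), so the ValidationError above is the regression being reported, not a bad test. For completeness, a sketch of the workaround the error message itself names, re-running the create with --validate=false; the CR body is copied from the dump, and kubectl on PATH with a working kubeconfig is assumed:

package main

import (
    "os"
    "os/exec"
    "strings"
)

func main() {
    // CR body from the failure message, verbatim.
    cr := `{"kind":"E2e-test-kubectl-4092-crd","apiVersion":"kubectl-crd-test.k8s.io/v1","metadata":{"name":"test-cr"},"spec":{"bars":[{"name":"test-bar"}],"extraProperty":"arbitrary-value"}}`
    // --validate=false disables the client-side OpenAPI schema check that
    // rejected the unknown field "extraProperty".
    cmd := exec.Command("kubectl", "create", "--validate=false", "-f", "-")
    cmd.Stdin = strings.NewReader(cr)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        os.Exit(1)
    }
}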



Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config 2m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:622
Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.233.164.69 --kubeconfig=/tmp/gke-kubecfg933856301 exec --namespace=kubectl-9059 nginx -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1] []  <nil> I0711 22:17:22.940706     112 merged_client_builder.go:164] Using in-cluster namespace\nI0711 22:17:37.948004     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0711 22:17:37.948288     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:54637->10.0.0.10:53: read: connection refused\nI0711 22:17:52.953174     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 15004 milliseconds\nI0711 22:17:52.953275     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:44074->10.0.0.10:53: read: connection refused\nI0711 22:17:52.953327     112 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:44074->10.0.0.10:53: read: connection refused\nI0711 22:18:12.969293     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 20015 milliseconds\nI0711 22:18:12.969411     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:52404->10.0.0.10:53: read: connection refused\nI0711 22:18:32.972955     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 20003 milliseconds\nI0711 22:18:32.973040     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:50486->10.0.0.10:53: i/o timeout\nI0711 22:18:47.976685     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0711 22:18:47.976764     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:42890->10.0.0.10:53: read: connection refused\nI0711 22:18:47.976808     112 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:42890->10.0.0.10:53: read: connection refused\nF0711 22:18:47.976852     112 helpers.go:114] The connection to the server invalid was refused - did you specify the right host or port?\n + /tmp/kubectl get pods '--server=invalid' '--v=6'\ncommand terminated with exit code 255\n [] <nil> 0xc000976810 exit status 255 <nil> <nil> true [0xc0027c62a0 0xc0027c62b8 0xc0027c62d0] [0xc0027c62a0 0xc0027c62b8 0xc0027c62d0] [0xc0027c62b0 0xc0027c62c8] [0x9d10e0 0x9d10e0] 0xc0024f1c80 <nil>}:\nCommand stdout:\nI0711 22:17:22.940706     112 merged_client_builder.go:164] Using in-cluster namespace\nI0711 22:17:37.948004     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0711 22:17:37.948288     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:54637->10.0.0.10:53: read: connection refused\nI0711 22:17:52.953174     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 15004 milliseconds\nI0711 
22:17:52.953275     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:44074->10.0.0.10:53: read: connection refused\nI0711 22:17:52.953327     112 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:44074->10.0.0.10:53: read: connection refused\nI0711 22:18:12.969293     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 20015 milliseconds\nI0711 22:18:12.969411     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:52404->10.0.0.10:53: read: connection refused\nI0711 22:18:32.972955     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 20003 milliseconds\nI0711 22:18:32.973040     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:50486->10.0.0.10:53: i/o timeout\nI0711 22:18:47.976685     112 round_trippers.go:438] GET http://invalid/api?timeout=32s  in 15003 milliseconds\nI0711 22:18:47.976764     112 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:42890->10.0.0.10:53: read: connection refused\nI0711 22:18:47.976808     112 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: read udp 10.44.0.90:42890->10.0.0.10:53: read: connection refused\nF0711 22:18:47.976852     112 helpers.go:114] The connection to the server invalid was refused - did you specify the right host or port?\n\nstderr:\n+ /tmp/kubectl get pods '--server=invalid' '--v=6'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
to contain substring
    <string>: Unable to connect to the server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:729
				
stdout/stderr from junit_23.xml
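
This failure is a substring assertion: the test expected kubectl's output to contain "Unable to connect to the server", but kubectl printed "The connection to the server invalid was refused" instead. A minimal sketch of the assertion shape in gomega (which the e2e framework uses); this is illustrative, not the test's actual code:

package e2e_sketch

import (
    "testing"

    "github.com/onsi/gomega"
)

func TestKubectlConnectionErrorMessage(t *testing.T) {
    g := gomega.NewGomegaWithT(t)
    // Output actually produced by kubectl in this run:
    output := "The connection to the server invalid was refused - did you specify the right host or port?"
    // Substring the test expects; this assertion fails, matching the report.
    g.Expect(output).To(gomega.ContainSubstring("Unable to connect to the server"))
}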



Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] 10m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sshould\sprovide\s\/etc\/hosts\sentries\sfor\sthe\scluster\s\[LinuxOnly\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error:
    <*errors.errorString | 0xc0002ad850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:518
				
stdout/stderr from junit_09.xml



Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance] 10m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sshould\sprovide\sDNS\sfor\sExternalName\sservices\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error:
    <*errors.errorString | 0xc0002ad850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:518
				
stdout/stderr from junit_09.xml



Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] 10m46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sshould\sprovide\sDNS\sfor\spods\sfor\sHostname\s\[LinuxOnly\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error:
    <*errors.errorString | 0xc0002af850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:518