Result: FAILURE
Tests: 42 failed / 438 succeeded
Started: 2019-11-22 12:01
Elapsed: 15h6m
Builder: gke-prow-ssd-pool-1a225945-nj5q
pod: a8ff84d6-0d1f-11ea-b26a-065b5133c63f
resultstore: https://source.cloud.google.com/results/invocations/51b0b9fd-43e0-4b90-8180-a773bcb59caf/targets/test
infra-commit: 4ab1254b1
job-version: v1.15.7-beta.0.1+54260e2be0c03f
revision: v1.15.7-beta.0.1+54260e2be0c03f
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 10m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:30:37.012: Couldn't delete ns: "container-probe-5713": namespace container-probe-5713 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-probe-5713 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
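Note on this failure mode: the teardown error above (framework.go:335) recurs in nine of the failures listed here. In each case the namespace's contents were already deleted, but the Namespace object itself lingered (typically held by finalizers) until the delete-wait expired. A minimal sketch of such a wait loop follows, assuming 1.15-era client-go signatures (no context argument) and illustrative interval/timeout values; it is not the e2e framework's actual code.

package nsutil

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceDeleted polls until a GET on the namespace returns NotFound.
// A timeout here surfaces as "timed out waiting for the condition".
func waitForNamespaceDeleted(c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // object fully removed: done
		}
		if err != nil {
			return false, err // unexpected API error aborts the wait
		}
		// The Namespace object still exists (usually Terminating, waiting on
		// finalizers) even though everything inside it is gone: the
		// "namespace is empty but is not yet removed" case.
		return false, nil
	})
}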


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 10m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 23 02:01:14.334: Couldn't delete ns: "container-probe-5457": namespace container-probe-5457 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-probe-5457 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
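Both Probing container failures above are teardown-only: the probe behavior itself was exercised, and the run then died on the namespace-deletion timeout. For reference, the two probes under test have roughly these shapes (a sketch with illustrative ports and timings, using the 1.15-era corev1.Handler embedding, which later releases renamed ProbeHandler):

package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// HTTP liveness: the kubelet GETs /healthz and restarts the container when it fails.
var httpLiveness = corev1.Probe{
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
	},
	InitialDelaySeconds: 15,
	FailureThreshold:    1,
}

// Exec liveness: the kubelet runs `cat /tmp/health`; a non-zero exit triggers a restart.
var execLiveness = corev1.Probe{
	Handler: corev1.Handler{
		Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
	},
	InitialDelaySeconds: 15,
}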


Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\sreject\sinvalid\ssysctls$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 23 00:37:20.209: Couldn't delete ns: "sysctl-2883": namespace sysctl-2883 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace sysctl-2883 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
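Another teardown-only failure. What the test itself asserts is that the apiserver rejects a pod whose security context names a malformed sysctl, along these lines (hypothetical invalid name; any name failing sysctl validation applies):

package sysctls

import corev1 "k8s.io/api/core/v1"

// A pod carrying this security context should be rejected at admission:
// "foo-" is not a valid sysctl name (trailing dash).
var invalidSysctls = corev1.PodSecurityContext{
	Sysctls: []corev1.Sysctl{
		{Name: "foo-", Value: "bar"},
	},
}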


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 10m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 21:23:54.957: Couldn't delete ns: "pods-2760": namespace pods-2760 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace pods-2760 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
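Teardown-only again. The behavior under test is that a pod delete request carries a grace period and the object disappears once it elapses. A sketch of an explicit grace-period delete, again with 1.15-era client-go signatures:

package pods

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodWithGrace issues a delete with an explicit grace period:
// 0 forces immediate removal; 30 is the usual default.
func deletePodWithGrace(c kubernetes.Interface, ns, name string, seconds int64) error {
	return c.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{
		GracePeriodSeconds: &seconds,
	})
}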


Kubernetes e2e suite [k8s.io] [sig-node] SSH should SSH to all nodes and run commands 10m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSSH\sshould\sSSH\sto\sall\snodes\sand\srun\scommands$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 21:34:01.768: Couldn't delete ns: "ssh-663": namespace ssh-663 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace ssh-663 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny pod and configmap creation 51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\spod\sand\sconfigmap\screation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:127
Nov 22 19:25:50.327: expect timeout error "request did not complete within", got "context deadline exceeded"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:721
				
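This one is different: the webhook did deny the request, but the test expected the apiserver's timeout wording ("request did not complete within") and saw a bare "context deadline exceeded" instead, so the string match failed. For orientation, a deny-pods-and-configmaps webhook is registered with a configuration roughly like this sketch (modern admissionregistration/v1 shape with hypothetical names and values; the 1.15-era test used v1beta1):

package webhook

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func denyPodsConfig() *admissionregistrationv1.ValidatingWebhookConfiguration {
	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	timeout := int32(10)   // seconds; illustrative
	path := "/always-deny" // hypothetical service path
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-example"}, // hypothetical
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "webhook-svc", Path: &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &sideEffects,
			TimeoutSeconds:          &timeout,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}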


Kubernetes e2e suite [sig-api-machinery] ResourceQuota Should be able to update and delete ResourceQuota. 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sShould\sbe\sable\sto\supdate\sand\sdelete\sResourceQuota\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:38:15.683: Couldn't delete ns: "resourcequota-8638": namespace resourcequota-8638 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-8638 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] 10m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sRecreateDeployment\sshould\sdelete\sold\spods\sand\screate\snew\sones\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:48:21.373: Couldn't delete ns: "deployment-4335": namespace deployment-4335 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace deployment-4335 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] 13m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[StatefulSet\]\sshould\scome\sback\sup\sif\snode\sgoes\sdown\s\[Slow\]\s\[Disruptive\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 20:20:39.485: Couldn't delete ns: "network-partition-1176": namespace network-partition-1176 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace network-partition-1176 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\senforce\sthe\srestricted\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
				
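Here the expectation was inverted: creating a forbidden pod should have errored and did not. The PodSecurityPolicy API (policy/v1beta1, since removed in Kubernetes 1.25) expressed a restricted policy roughly like this sketch (illustrative field choices, not the test's exact policy):

package psp

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func restrictedPSP() *policyv1beta1.PodSecurityPolicy {
	allowEscalation := false
	return &policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-example"}, // hypothetical
		Spec: policyv1beta1.PodSecurityPolicySpec{
			Privileged:               false,
			AllowPrivilegeEscalation: &allowEscalation,
			RunAsUser: policyv1beta1.RunAsUserStrategyOptions{
				Rule: policyv1beta1.RunAsUserStrategyMustRunAsNonRoot,
			},
			SELinux: policyv1beta1.SELinuxStrategyOptions{
				Rule: policyv1beta1.SELinuxStrategyRunAsAny,
			},
			SupplementalGroups: policyv1beta1.SupplementalGroupsStrategyOptions{
				Rule: policyv1beta1.SupplementalGroupsStrategyRunAsAny,
			},
			FSGroup: policyv1beta1.FSGroupStrategyOptions{
				Rule: policyv1beta1.FSGroupStrategyRunAsAny,
			},
			Volumes: []policyv1beta1.FSType{
				policyv1beta1.ConfigMap, policyv1beta1.Secret, policyv1beta1.EmptyDir,
			},
		},
	}
}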


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 15m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002a18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
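This failure and the 5-to-3 scale-down case below stop at the same point: the 15m wait for the Deployment to reach 3 replicas expires, which suggests the replica count never moved (metrics collection or load generation trouble) rather than a malformed autoscaler. The HPA object such a test drives looks roughly like this sketch (autoscaling/v1, hypothetical target name and CPU threshold):

package hpa

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hpaFor returns an HPA that scales a Deployment between 1 and 5 replicas
// on average CPU utilization.
func hpaFor(deployment string) *autoscalingv1.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(20) // illustrative threshold, not the test's value
	return &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: deployment},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       deployment,
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
}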


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 15m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002a18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sProxy\sserver\sshould\ssupport\sproxy\swith\s\-\-port\s0\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:14:43.360: Couldn't delete ns: "kubectl-4791": namespace kubectl-4791 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-4791 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config 13m30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:621
Unexpected error:
    <exec.CodeExitError> (Code: 255):
    error running &{../../../../kubernetes_skew/cluster/kubectl.sh [../../../../kubernetes_skew/cluster/kubectl.sh --server=https://35.223.127.186 --kubeconfig=/tmp/gke-kubecfg295322224 exec --namespace=kubectl-6015 nginx -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1] []  <nil> I1122 22:16:34.002583     141 loader.go:375] Config loaded from file:  /tmp/icc-override.kubeconfig
    I1122 22:16:49.007184     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 15003 milliseconds
    I1122 22:16:49.007310     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:48462->10.55.240.10:53: read: connection refused
    I1122 22:17:09.017194     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 20003 milliseconds
    I1122 22:17:09.017266     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:43464->10.55.240.10:53: read: connection refused
    I1122 22:17:09.017285     141 shortcut.go:89] Error loading discovery information: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:43464->10.55.240.10:53: read: connection refused
    I1122 22:17:29.020985     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 20003 milliseconds
    I1122 22:17:29.021059     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:52447->10.55.240.10:53: read: connection refused
    I1122 22:17:44.023754     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 15002 milliseconds
    I1122 22:17:44.024051     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:36036->10.55.240.10:53: read: connection refused
    I1122 22:18:09.027757     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 25003 milliseconds
    I1122 22:18:09.027866     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:49765->10.55.240.10:53: i/o timeout
    I1122 22:18:09.027919     141 helpers.go:217] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:49765->10.55.240.10:53: i/o timeout
    F1122 22:18:09.027943     141 helpers.go:114] Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:49765->10.55.240.10:53: i/o timeout
     + /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'
    command terminated with exit code 255
     [] <nil> 0xc0064126c0 exit status 255 <nil> <nil> true [0xc003656638 0xc003656650 0xc003656668] [0xc003656638 0xc003656650 0xc003656668] [0xc003656648 0xc003656660] [0xba6c10 0xba6c10] 0xc0039de600 <nil>}:
    Command stdout:
    I1122 22:16:34.002583     141 loader.go:375] Config loaded from file:  /tmp/icc-override.kubeconfig
    I1122 22:16:49.007184     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 15003 milliseconds
    I1122 22:16:49.007310     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:48462->10.55.240.10:53: read: connection refused
    I1122 22:17:09.017194     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 20003 milliseconds
    I1122 22:17:09.017266     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:43464->10.55.240.10:53: read: connection refused
    I1122 22:17:09.017285     141 shortcut.go:89] Error loading discovery information: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:43464->10.55.240.10:53: read: connection refused
    I1122 22:17:29.020985     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 20003 milliseconds
    I1122 22:17:29.021059     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:52447->10.55.240.10:53: read: connection refused
    I1122 22:17:44.023754     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 15002 milliseconds
    I1122 22:17:44.024051     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:36036->10.55.240.10:53: read: connection refused
    I1122 22:18:09.027757     141 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s  in 25003 milliseconds
    I1122 22:18:09.027866     141 cached_discovery.go:121] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:49765->10.55.240.10:53: i/o timeout
    I1122 22:18:09.027919     141 helpers.go:217] Connection error: Get https://kubernetes.default.svc:443/api?timeout=32s: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:49765->10.55.240.10:53: i/o timeout
    F1122 22:18:09.027943     141 helpers.go:114] Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 10.55.240.10:53: read udp 10.52.3.240:49765->10.55.240.10:53: i/o timeout
    
    stderr:
    + /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'
    command terminated with exit code 255
    
    error:
    exit status 255
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3348
				
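Every attempt in the log above died resolving kubernetes.default.svc against the cluster DNS service at 10.55.240.10:53 (connection refused, then i/o timeout), so this reads as a cluster-DNS outage rather than a kubectl defect. For contrast, the standard in-cluster path used by client-go does not depend on that DNS lookup (a minimal sketch; rest.InClusterConfig reads the injected env vars and the mounted service-account token):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig locates the apiserver from KUBERNETES_SERVICE_HOST and
	// KUBERNETES_SERVICE_PORT, so no lookup of kubernetes.default.svc is needed.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println("not running inside a pod:", err)
		return
	}
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		fmt.Println("client construction failed:", err)
	}
}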


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 15m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:50
Nov 23 02:49:06.331: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:76