PR (draveness): feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 10 failed / 759 succeeded
Started: 2019-10-18 17:44
Elapsed: 1h35m
Revision: 823183a9166e58f9101fc9f94b047e697b4b5e0b
Refs: 82703
job-version: v1.17.0-alpha.2.192+5e13f34af0a787
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
revision: v1.17.0-alpha.2.192+5e13f34af0a787

Test Failures


DumpClusterLogs 10m43s

error during ./cluster/log-dump/log-dump.sh /logs/artifacts (interrupted): signal: interrupt
				from junit_runner.xml

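Each failure entry in this report cites a per-shard JUnit file (junit_runner.xml, junit_13.xml, and so on). As a sketch of how failed cases can be pulled out of such a file with only the Python standard library (the sample XML below is illustrative, not taken from this run):

```python
import xml.etree.ElementTree as ET

# Illustrative JUnit snippet in the general shape the test shards emit;
# the test names and failure message here are made up, not from this run.
sample = """<testsuite tests="2" failures="1">
  <testcase name="ok-test" time="1.0"/>
  <testcase name="bad-test" time="2.0">
    <failure>timed out waiting for the condition</failure>
  </testcase>
</testsuite>"""

root = ET.fromstring(sample)
# A testcase counts as failed when it carries a <failure> child element.
failed = [tc.get("name") for tc in root.iter("testcase")
          if tc.find("failure") is not None]
print(failed)  # -> ['bad-test']
```

Running this against the real artifacts would mean parsing each junit_*.xml from the artifacts directory instead of the inline sample.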


Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
test/e2e/common/pods.go:776
Oct 18 18:25:07.368: Unexpected error:
    <*errors.errorString | 0xc0000d7090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/pods.go:810
				
from junit_13.xml

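The repro commands in this report pass --ginkgo.focus an anchored regular expression built from the full test name, with spaces rendered as \s and regex metacharacters escaped. A quick sanity check of that escaping (a sketch using Python's re module, which accepts the same escapes here):

```python
import re

# The focus pattern from the repro command above, verbatim.
focus = (r"Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\s"
         r"pod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$")

# The full test name as it appears in the failure heading.
name = ("Kubernetes e2e suite [k8s.io] Pods should support pod readiness "
        "gates [NodeFeature:PodReadinessGate]")

print(bool(re.search(focus, name)))  # -> True
```

The trailing $ anchors the match at the end of the name, so the pattern selects exactly this test rather than any test whose name merely contains the string.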


Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined 45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\scan\sdisable\san\sAppArmor\sprofile\,\susing\sunconfined$'
test/e2e/node/apparmor.go:45
Oct 18 18:23:57.755: Unexpected error:
    <*errors.errorString | 0xc001a69d60>: {
        s: "pod \"test-apparmor-t9jck\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.40.0.5 PodIP:10.64.3.27 PodIPs:[{IP:10.64.3.27}] StartTime:2019-10-18 18:23:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2019-10-18 18:23:49 +0000 UTC,FinishedAt:2019-10-18 18:23:49 +0000 UTC,ContainerID:docker://a275ccd61b9726fab07a1ff1023cb2685fdcac6f45006db755bbe11c1e5d9159,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://a275ccd61b9726fab07a1ff1023cb2685fdcac6f45006db755bbe11c1e5d9159 Started:0xc0014b92fc}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "test-apparmor-t9jck" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-10-18 18:23:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.40.0.5 PodIP:10.64.3.27 PodIPs:[{IP:10.64.3.27}] StartTime:2019-10-18 18:23:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2019-10-18 18:23:49 +0000 UTC,FinishedAt:2019-10-18 18:23:49 +0000 UTC,ContainerID:docker://a275ccd61b9726fab07a1ff1023cb2685fdcac6f45006db755bbe11c1e5d9159,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://a275ccd61b9726fab07a1ff1023cb2685fdcac6f45006db755bbe11c1e5d9159 Started:0xc0014b92fc}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred
test/e2e/common/apparmor.go:128
				
from junit_08.xml



Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob 9m45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\sjobs\sand\spods\screated\sby\scronjob$'
test/e2e/framework/framework.go:151
Oct 18 18:33:58.722: All nodes should be ready after test, Get https://35.247.16.225/api/v1/nodes: net/http: TLS handshake timeout
test/e2e/framework/framework.go:388
				
from junit_13.xml



Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout 14m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sshould\snot\sdisrupt\sa\scloud\sload\-balancer\'s\sconnectivity\sduring\srollout$'
test/e2e/apps/deployment.go:123
Oct 18 18:33:30.449: Unexpected error:
    <*errors.errorString | 0xc002d5b490>: {
        s: "error waiting for deployment \"test-rolling-update-with-lb\" status to match expectation: Get https://35.247.16.225/apis/apps/v1/namespaces/deployment-5438/deployments/test-rolling-update-with-lb: net/http: TLS handshake timeout",
    }
    error waiting for deployment "test-rolling-update-with-lb" status to match expectation: Get https://35.247.16.225/apis/apps/v1/namespaces/deployment-5438/deployments/test-rolling-update-with-lb: net/http: TLS handshake timeout
occurred
test/e2e/apps/deployment.go:953
				
from junit_25.xml



Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] 3m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\slabel\sshould\supdate\sthe\slabel\son\sa\sresource\s\s\[Conformance\]$'
test/e2e/framework/framework.go:691
Oct 18 18:41:50.848: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.247.16.225 --kubeconfig=/workspace/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1876] []  <nil>  Unable to connect to the server: net/http: TLS handshake timeout\n [] <nil> 0xc0026e7920 exit status 1 <nil> <nil> true [0xc00291ab90 0xc00291aba8 0xc00291abc0] [0xc00291ab90 0xc00291aba8 0xc00291abc0] [0xc00291aba0 0xc00291abb8] [0x10f1850 0x10f1850] 0xc002185c20 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: net/http: TLS handshake timeout\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.247.16.225 --kubeconfig=/workspace/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1876] []  <nil>  Unable to connect to the server: net/http: TLS handshake timeout
     [] <nil> 0xc0026e7920 exit status 1 <nil> <nil> true [0xc00291ab90 0xc00291aba8 0xc00291abc0] [0xc00291ab90 0xc00291aba8 0xc00291abc0] [0xc00291aba0 0xc00291abb8] [0x10f1850 0x10f1850] 0xc002185c20 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: net/http: TLS handshake timeout
    
    error:
    exit status 1
occurred
test/e2e/framework/util.go:1091
				
from junit_27.xml



Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp 10m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sendpoint\-Service\:\sudp$'
test/e2e/network/networking.go:153
Oct 18 18:36:21.421: failed to get pod host-test-container-pod
Unexpected error:
    <*url.Error | 0xc002266c30>: {
        Op: "Get",
        URL: "https://35.247.16.225/api/v1/namespaces/nettest-8731/pods/host-test-container-pod",
        Err: {},
    }
    Get https://35.247.16.225/api/v1/namespaces/nettest-8731/pods/host-test-container-pod: net/http: TLS handshake timeout
occurred
test/e2e/framework/exec_util.go:121
				
from junit_29.xml



Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] 2m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sservice\swith\stype\sclusterIP\s\[LinuxOnly\]$'
test/e2e/network/service.go:1766
Oct 18 18:23:45.058: Connection to 10.0.48.69:80 timed out or not enough responses.
test/e2e/framework/service/affinity_checker.go:113
				
from junit_07.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly] 13m36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sfile\sspecified\sin\sthe\svolumeMount\s\[LinuxOnly\]$'
test/e2e/storage/testsuites/subpath.go:365
Oct 18 18:40:18.299: Error getting Kubelet e2e-e7e264e655-abe28-minion-group-l85b metrics: Get https://35.247.16.225/api/v1/nodes?fieldSelector=metadata.name%3De2e-e7e264e655-abe28-minion-group-l85b: net/http: TLS handshake timeout
Unexpected error:
    <*url.Error | 0xc0029437a0>: {
        Op: "Get",
        URL: "https://35.247.16.225/api/v1/nodes?fieldSelector=metadata.name%3De2e-e7e264e655-abe28-minion-group-l85b",
        Err: {},
    }
    Get https://35.247.16.225/api/v1/nodes?fieldSelector=metadata.name%3De2e-e7e264e655-abe28-minion-group-l85b: net/http: TLS handshake timeout
occurred
test/e2e/storage/testsuites/base.go:591
				
from junit_17.xml



Timeout 1h20m

kubetest --timeout triggered
				from junit_runner.xml



759 passed tests and 4238 skipped tests are omitted from this report.