Result: FAILURE
Tests: 24 failed / 1108 succeeded
Started: 2019-07-07 09:10
Elapsed: 21h26m
Revision: v1.15.1-beta.0.17+1bb90b5835bbbf
Builder: gke-prow-ssd-pool-1a225945-hpnl
Pod: 09fbdfdb-a097-11e9-9536-2e3983d90744
Resultstore: https://source.cloud.google.com/results/invocations/e5ce0ff7-c83e-4339-950a-c3644e45ca02/targets/test
infra-commit: 63eb09459
job-version: v1.15.1-beta.0.17+1bb90b5835bbbf
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] 3m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  7 23:48:16.308: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392
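This "All nodes should be ready after test" check recurs in most of the failures below: after each spec, the framework re-lists the nodes and asserts that every one still reports the NodeReady condition. A minimal sketch of that kind of check with v1.15-era client-go (no context arguments; the function name and structure are illustrative, not the framework's exact code):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadyNodes returns the names of nodes whose NodeReady condition
// is not True, mirroring the post-test "all nodes ready" assertion.
func notReadyNodes(cs kubernetes.Interface) ([]string, error) {
	nodes, err := cs.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, node := range nodes.Items {
		ready := false
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			bad = append(bad, node.Name)
		}
	}
	return bad, nil
}

Here bootstrap-e2e-minion-group-xdhv stayed NotReady, so every test that ran after it went down fails this check regardless of what the test itself did.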


Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 0 pods per node 20m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sKubelet\s\[Serial\]\s\[Slow\]\s\[k8s\.io\]\s\[sig\-node\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s0\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:263
Jul  7 16:50:11.903: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-s5d6:
 container "runtime": expected RSS memory (MB) < 131072000; got 135905280
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:155
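Despite the "(MB)" label in the message, both figures are raw bytes; a quick conversion (not part of the test) shows the margin by which the "runtime" container overshot its RSS cap:

package main

import "fmt"

func main() {
	const mib = 1024 * 1024
	// Values copied from the failure message above; they are bytes,
	// despite its "(MB)" label.
	limit, got := 131072000.0, 135905280.0
	fmt.Printf("limit %.1f MiB, got %.1f MiB\n", limit/mib, got/mib) // 125.0 vs 129.6
}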


Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error 2m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sNodeProblemDetector\s\[DisabledForLargeClusters\]\sshould\srun\swithout\serror$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:56
Timed out after 60.001s.
Expected success, but got an error:
    <*errors.errorString | 0xc002bdfc10>: {
        s: "expect event number 1, got 0: [{{ } {bootstrap-e2e-minion-group-mrcx.15ade546602dd737  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-mrcx.15ade546602dd737 af40b89a-f187-4d41-9998-b11a5badd817 255148 0 2019-07-07 13:41:10 +0000 UTC <nil> <nil> map[] map[] [] nil []  []} {Node  bootstrap-e2e-minion-group-mrcx bootstrap-e2e-minion-group-mrcx   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-mrcx} 2019-07-03 12:25:36 +0000 UTC 2019-07-07 13:41:10 +0000 UTC 6 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-s5d6.15ade546f063f8ee  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s5d6.15ade546f063f8ee c15a235d-5e03-417b-9c2c-ef1427c2e197 255149 0 2019-07-07 13:41:13 +0000 UTC <nil> <nil> map[] map[] [] nil []  []} {Node  bootstrap-e2e-minion-group-s5d6 bootstrap-e2e-minion-group-s5d6   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-s5d6} 2019-07-03 12:25:38 +0000 UTC 2019-07-07 13:41:13 +0000 UTC 6 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-xdhv.15adb0438983e76f  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-xdhv.15adb0438983e76f 42e83a54-bc62-4990-8999-fa0e91a53b46 255150 0 2019-07-07 13:41:15 +0000 UTC <nil> <nil> map[] map[] [] nil []  []} {Node  bootstrap-e2e-minion-group-xdhv bootstrap-e2e-minion-group-xdhv   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-xdhv} 2019-07-02 20:14:09 +0000 UTC 2019-07-07 13:41:15 +0000 UTC 7 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  }]",
    }
    expect event number 1, got 0: [{{ } {bootstrap-e2e-minion-group-mrcx.15ade546602dd737  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-mrcx.15ade546602dd737 af40b89a-f187-4d41-9998-b11a5badd817 255148 0 2019-07-07 13:41:10 +0000 UTC <nil> <nil> map[] map[] [] nil []  []} {Node  bootstrap-e2e-minion-group-mrcx bootstrap-e2e-minion-group-mrcx   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-mrcx} 2019-07-03 12:25:36 +0000 UTC 2019-07-07 13:41:10 +0000 UTC 6 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-s5d6.15ade546f063f8ee  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s5d6.15ade546f063f8ee c15a235d-5e03-417b-9c2c-ef1427c2e197 255149 0 2019-07-07 13:41:13 +0000 UTC <nil> <nil> map[] map[] [] nil []  []} {Node  bootstrap-e2e-minion-group-s5d6 bootstrap-e2e-minion-group-s5d6   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-s5d6} 2019-07-03 12:25:38 +0000 UTC 2019-07-07 13:41:13 +0000 UTC 6 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-xdhv.15adb0438983e76f  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-xdhv.15adb0438983e76f 42e83a54-bc62-4990-8999-fa0e91a53b46 255150 0 2019-07-07 13:41:15 +0000 UTC <nil> <nil> map[] map[] [] nil []  []} {Node  bootstrap-e2e-minion-group-xdhv bootstrap-e2e-minion-group-xdhv   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-xdhv} 2019-07-02 20:14:09 +0000 UTC 2019-07-07 13:41:15 +0000 UTC 7 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  }]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:129
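The assertion expected one matching event but counted zero; everything in the dump above is a stale TaskHung event from Jul 2-3 with counts of 6 and 7, i.e. left over from earlier runs. These are ordinary Node events in the default namespace, queryable roughly like this with v1.15-era client-go (a sketch; the test's exact field selector may differ):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeEvents lists the Events attached to a Node in the default
// namespace, like the TaskHung events dumped above.
func nodeEvents(cs kubernetes.Interface, nodeName string) (*corev1.EventList, error) {
	return cs.CoreV1().Events("default").List(metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Node,involvedObject.name=" + nodeName,
	})
}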


Kubernetes e2e suite [k8s.io] [sig-node] crictl should be able to run crictl on the node 3m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\scrictl\sshould\sbe\sable\sto\srun\scrictl\son\sthe\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  8 00:32:42.042: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] 3m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\spods\screated\sby\src\swhen\snot\sorphaning\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  7 23:59:18.691: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 3m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sWatchers\sshould\sobserve\san\sobject\sdeletion\sif\sit\sstops\smeeting\sthe\srequirements\sof\sthe\sselector\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  8 00:26:03.416: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] 8m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sRollingUpdateDeployment\sshould\sdelete\sold\spods\sand\screate\snew\sones\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition]
Unexpected error:
    <*errors.errorString | 0xc002285170>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:278
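"timed out waiting for the condition" is the generic timeout error from apimachinery's wait package: the rolling update's pods never all reached Running inside the poll window. The underlying loop is essentially the following (a sketch with v1.15-era signatures; the framework's own helpers add logging and bookkeeping):

package e2esketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls until the pod reaches Running; on timeout the
// returned error is wait's "timed out waiting for the condition".
func waitForPodRunning(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}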


Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial] 2m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sconfigMap\snameserver\s\[IPv4\]\sForward\sPTR\slookup\sshould\sforward\sPTR\srecords\slookup\sto\supstream\snameserver\s\[Slow\]\[Serial\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:497
Jul  7 20:06:06.143: dig result did not match: []string{"dns.google."} after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:103
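The expectation []string{"dns.google."} is a PTR (reverse-lookup) answer, so the failure means the cluster DNS never forwarded the PTR query to the upstream nameserver within 2m0s. For reference, the equivalent lookup in plain Go outside the cluster (illustrative only; the test runs dig inside a pod against the cluster DNS, and 8.8.8.8 here is an assumption about the queried address):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Reverse lookup; 8.8.8.8 resolves to "dns.google." via public DNS.
	names, err := net.LookupAddr("8.8.8.8")
	if err != nil {
		panic(err)
	}
	fmt.Println(names)
}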


Kubernetes e2e suite [sig-network] Services should release NodePorts on delete 3m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\srelease\sNodePorts\son\sdelete$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  8 00:43:52.480: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works 4m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\sbasic\spreemption\sworks$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  8 00:10:41.866: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath-v0] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly] 3m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\-v0\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\sfile\sis\soutside\sthe\svolume\s\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  8 00:47:25.584: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly] 4m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sfile\sas\ssubpath\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  7 23:56:00.615: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node 23m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sthe\ssame\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:118
Unexpected error:
    <*errors.errorString | 0xc005a19990>: {
        s: "pod \"security-context-6ddbf4e4-b8c7-4b2e-b5eb-5d1b7f22ddaa\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-6ddbf4e4-b8c7-4b2e-b5eb-5d1b7f22ddaa" is not Running: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:334


Kubernetes e2e suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4 3m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sGCP\sVolumes\sNFSv4\sshould\sbe\smountable\sfor\sNFSv4$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  7 23:51:59.672: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-xdhv"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow] 3m45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\semptydir\]\s\[Testpattern\:\sInline\-volume\s\(default\sfs\)\]\ssubPath\sshould\ssupport\screating\smultiple\ssubpath\sfrom\ssame\svolumes\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:270
Error getting Kubelet bootstrap-e2e-minion-group-xdhv metrics: the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-xdhv:10250)
Unexpected error:
    <*errors.StatusError | 0xc0023e2280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-xdhv:10250)",
            Reason: "ServiceUnavailable",
            Details: {
                Name: "bootstrap-e2e-minion-group-xdhv:10250",
                Group: "",
                Kind: "nodes",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.138.0.4:10250/metrics'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-xdhv:10250)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:540
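The 503 here comes from the apiserver, not the kubelet: the framework grabs kubelet metrics through the apiserver's node proxy subresource, so a kubelet-side "TLS handshake timeout" surfaces as ServiceUnavailable on the proxied GET. Roughly (a sketch with v1.15-era client-go; the function name is illustrative):

package e2esketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// kubeletMetrics fetches /metrics from a kubelet via the apiserver's
// node proxy -- the "get nodes <name>:10250" path in the error above.
func kubeletMetrics(cs kubernetes.Interface, nodeName string, port int) ([]byte, error) {
	return cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(fmt.Sprintf("%s:%d", nodeName, port)).
		SubResource("proxy").
		Suffix("metrics").
		DoRaw()
}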


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with defaults 3m44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sprovisioning\sshould\sprovision\sstorage\swith\sdefaults$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:153
Error getting Kubelet bootstrap-e2e-minion-group-xdhv metrics: the server is currently unable to handle the request (get nodes bootstrap-e2e-minion-group-xdhv:10250)
Unexpected error:
    <*errors.StatusError | 0xc0002d63c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVe