Result | FAILURE |
Tests | 7 failed / 180 succeeded |
Started | |
Elapsed | 4h9m |
Revision | |
Builder | gke-prow-ssd-pool-1a225945-9tvq |
links | resultstore: https://source.cloud.google.com/results/invocations/c485ffb5-4bde-44df-a493-3d7d40d91b6b/targets/test |
pod | dd9dfa18-0d09-11ea-b26a-065b5133c63f |
resultstore | https://source.cloud.google.com/results/invocations/c485ffb5-4bde-44df-a493-3d7d40d91b6b/targets/test |
infra-commit | 4ab1254b1 |
job-version | v1.17.0-beta.2.22+486425533b66fa |
master_os_image | cos-77-12371-89-0 |
node_os_image | ubuntu-gke-1804-d1703-0-v20191121 |
revision | v1.17.0-beta.2.22+486425533b66fa |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\sconcurrently\saccess\sthe\ssingle\svolume\sfrom\spods\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:293
Nov 22 10:50:29.137: waiting for csi driver node registration on: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:452
from junit_01.xml
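The timeout fires roughly one minute after the csi-gce-pd-node DaemonSet and csi-gce-pd-controller StatefulSet are created (10:49:28 to 10:50:29 in the log below), while PrepareTest is still waiting for the pd.csi.storage.gke.io driver to register on the test node. "timed out waiting for the condition" is the generic wait.ErrWaitTimeout message from a polling loop, not a driver-specific error. Below is a minimal sketch of that kind of registration wait, assuming a 1.17-era client-go (pre-context Get) and the v1beta1 CSINode API; it is an illustration, not the exact helper at csi.go:452.

```go
// Sketch only: not the verbatim helper at csi.go:452, but the kind of polling wait
// that produces "timed out waiting for the condition" (wait.ErrWaitTimeout).
// Assumes a 1.17-era client-go (pre-context Get) and the storage.k8s.io/v1beta1 CSINode API.
package main

import (
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDriverOnNode polls the node's CSINode object until driverName appears in
// spec.drivers, i.e. until the csi-node-driver-registrar sidecar has registered the
// plugin with the kubelet and the kubelet has published it.
func waitForDriverOnNode(cs kubernetes.Interface, nodeName, driverName string, timeout time.Duration) error {
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		csiNode, err := cs.StorageV1beta1().CSINodes().Get(nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // object may not exist yet; keep polling until timeout
		}
		for _, d := range csiNode.Spec.Drivers {
			if d.Name == driverName {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Node name and driver name taken from this run's log; both are illustrative inputs.
	err = waitForDriverOnNode(cs, "test-6bbac58e9d-minion-group-1pk2", "pd.csi.storage.gke.io", time.Minute)
	fmt.Println("registration wait result:", err)
}
```

If the registrar sidecar never publishes the driver in the node's CSINode object within the timeout, a loop like this returns exactly the error recorded above.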
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:88
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 22 10:49:27.864: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename multivolume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should concurrently access the single volume from pods on the same node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:293
STEP: deploying csi gce-pd driver
Nov 22 10:49:28.073: INFO: Found CI service account key at /etc/service-account/service-account.json
Nov 22 10:49:28.073: INFO: Running cp [/etc/service-account/service-account.json /tmp/8f411ebd-eaaa-4562-b6ec-468205c5e3c8/cloud-sa.json]
Nov 22 10:49:28.117: INFO: Shredding file /tmp/8f411ebd-eaaa-4562-b6ec-468205c5e3c8/cloud-sa.json
Nov 22 10:49:28.117: INFO: Running shred [--remove /tmp/8f411ebd-eaaa-4562-b6ec-468205c5e3c8/cloud-sa.json]
Nov 22 10:49:28.147: INFO: File /tmp/8f411ebd-eaaa-4562-b6ec-468205c5e3c8/cloud-sa.json successfully shredded
Nov 22 10:49:28.153: INFO: creating *v1.ServiceAccount: multivolume-4046/csi-attacher
Nov 22 10:49:28.198: INFO: creating *v1.ClusterRole: external-attacher-runner-multivolume-4046
Nov 22 10:49:28.198: INFO: Define cluster role external-attacher-runner-multivolume-4046
Nov 22 10:49:28.237: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-multivolume-4046
Nov 22 10:49:28.277: INFO: creating *v1.Role: multivolume-4046/external-attacher-cfg-multivolume-4046
Nov 22 10:49:28.316: INFO: creating *v1.RoleBinding: multivolume-4046/csi-attacher-role-cfg
Nov 22 10:49:28.355: INFO: creating *v1.ServiceAccount: multivolume-4046/csi-provisioner
Nov 22 10:49:28.394: INFO: creating *v1.ClusterRole: external-provisioner-runner-multivolume-4046
Nov 22 10:49:28.394: INFO: Define cluster role external-provisioner-runner-multivolume-4046
Nov 22 10:49:28.434: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-multivolume-4046
Nov 22 10:49:28.474: INFO: creating *v1.Role: multivolume-4046/external-provisioner-cfg-multivolume-4046
Nov 22 10:49:28.514: INFO: creating *v1.RoleBinding: multivolume-4046/csi-provisioner-role-cfg
Nov 22 10:49:28.553: INFO: creating *v1.ServiceAccount: multivolume-4046/csi-gce-pd-controller-sa
Nov 22 10:49:28.592: INFO: creating *v1.ClusterRole: csi-gce-pd-provisioner-role-multivolume-4046
Nov 22 10:49:28.592: INFO: Define cluster role csi-gce-pd-provisioner-role-multivolume-4046
Nov 22 10:49:28.630: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-controller-provisioner-binding-multivolume-4046
Nov 22 10:49:28.670: INFO: creating *v1.ClusterRole: csi-gce-pd-attacher-role-multivolume-4046
Nov 22 10:49:28.670: INFO: Define cluster role csi-gce-pd-attacher-role-multivolume-4046
Nov 22 10:49:28.710: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-controller-attacher-binding-multivolume-4046
Nov 22 10:49:28.748: INFO: creating *v1.ClusterRole: csi-gce-pd-resizer-role-multivolume-4046
Nov 22 10:49:28.748: INFO: Define cluster role csi-gce-pd-resizer-role-multivolume-4046
Nov 22 10:49:28.789: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-resizer-binding-multivolume-4046
Nov 22 10:49:28.828: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-multivolume-4046
Nov 22 10:49:28.867: INFO: creating *v1.DaemonSet: multivolume-4046/csi-gce-pd-node
Nov 22 10:49:28.909: INFO: creating *v1.StatefulSet: multivolume-4046/csi-gce-pd-controller
Nov 22 10:50:29.137: FAIL: waiting for csi driver node registration on: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/drivers.(*gcePDCSIDriver).PrepareTest(0xc00062c000, 0xc000bd7a40, 0xc002d07718, 0xc0000603e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:452 +0x204
k8s.io/kubernetes/test/e2e/storage/testsuites.(*multiVolumeTestSuite).defineTests.func2()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:108 +0xc6
k8s.io/kubernetes/test/e2e/storage/testsuites.(*multiVolumeTestSuite).defineTests.func8()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:294 +0x77
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00095a100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc00095a100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc00095a100, 0x4c2fc20)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "multivolume-4046".
STEP: Found 41 events.
Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-controller: {statefulset-controller } SuccessfulCreate: create Pod csi-gce-pd-controller-0 in StatefulSet csi-gce-pd-controller successful Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-controller-0: {default-scheduler } Scheduled: Successfully assigned multivolume-4046/csi-gce-pd-controller-0 to test-6bbac58e9d-minion-group-1pk2 Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-5mvgc Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-rw62z Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-zc2px Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-node-5mvgc: {default-scheduler } Scheduled: Successfully assigned multivolume-4046/csi-gce-pd-node-5mvgc to test-6bbac58e9d-minion-group-1pk2 Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-node-rw62z: {default-scheduler } Scheduled: Successfully assigned multivolume-4046/csi-gce-pd-node-rw62z to test-6bbac58e9d-minion-group-dtt3 Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:28 +0000 UTC - event for csi-gce-pd-node-zc2px: {default-scheduler } Scheduled: Successfully assigned multivolume-4046/csi-gce-pd-node-zc2px to test-6bbac58e9d-minion-group-ldgb Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:29 +0000 UTC - event for csi-gce-pd-node-5mvgc: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:29 +0000 UTC - event for csi-gce-pd-node-rw62z: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:29 +0000 UTC - event for csi-gce-pd-node-rw62z: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container csi-driver-registrar Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:29 +0000 UTC - event for csi-gce-pd-node-zc2px: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container csi-driver-registrar Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:29 +0000 UTC - event for csi-gce-pd-node-zc2px: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulling: Pulling image "gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0" Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-5mvgc: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-5mvgc: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container gce-pd-driver Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-5mvgc: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: 
Created container gce-pd-driver Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-5mvgc: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-driver-registrar Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-5mvgc: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-driver-registrar Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-rw62z: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-rw62z: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container gce-pd-driver Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-rw62z: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container csi-driver-registrar Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-rw62z: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container gce-pd-driver Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-zc2px: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container csi-driver-registrar Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-zc2px: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-zc2px: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container gce-pd-driver Nov 22 10:50:29.177: INFO: At 2019-11-22 10:49:30 +0000 UTC - event for csi-gce-pd-node-zc2px: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container gce-pd-driver Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:32 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Successfully pulled image "gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0" Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:33 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-provisioner Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:33 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-provisioner Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:33 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulling: Pulling image "gcr.io/gke-release/csi-attacher:v2.0.0-gke.0" Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulling: Pulling image "gcr.io/gke-release/csi-resizer:v0.3.0-gke.0" Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-attacher Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Successfully pulled image "gcr.io/gke-release/csi-attacher:v2.0.0-gke.0" Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet 
test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-attacher Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:38 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Successfully pulled image "gcr.io/gke-release/csi-resizer:v0.3.0-gke.0" Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:38 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-resizer Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:38 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-resizer Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:38 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:39 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container gce-pd-driver Nov 22 10:50:29.178: INFO: At 2019-11-22 10:49:39 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container gce-pd-driver Nov 22 10:50:29.218: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 10:50:29.218: INFO: csi-gce-pd-controller-0 test-6bbac58e9d-minion-group-1pk2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC }] Nov 22 10:50:29.218: INFO: csi-gce-pd-node-5mvgc test-6bbac58e9d-minion-group-1pk2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC }] Nov 22 10:50:29.218: INFO: csi-gce-pd-node-rw62z test-6bbac58e9d-minion-group-dtt3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC }] Nov 22 10:50:29.218: INFO: csi-gce-pd-node-zc2px test-6bbac58e9d-minion-group-ldgb Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 10:49:28 +0000 UTC }] Nov 22 10:50:29.218: INFO: Nov 22 10:50:29.259: INFO: Logging node info for node test-6bbac58e9d-master Nov 22 10:50:29.297: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-master /api/v1/nodes/test-6bbac58e9d-master 8a7a430e-36f3-4dcf-b7dd-f2a903ca1fa5 20101 0 2019-11-22 09:29:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b 
kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 10:45:35 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 10:45:35 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 10:45:35 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 10:45:35 +0000 UTC,LastTransitionTime:2019-11-22 09:29:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.175.21,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa8a320b898c1d5588780170530d5cf8,SystemUUID:fa8a320b-898c-1d55-8878-0170530d5cf8,BootID:2730095f-f6ec-4217-a9ae-32ba996e1eed,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:212137343,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:200623393,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:110377926,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 10:50:29.298: INFO: Logging kubelet events for node test-6bbac58e9d-master Nov 22 10:50:29.339: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-master Nov 22 10:50:29.388: INFO: fluentd-gcp-v3.2.0-fxhtk started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.388: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 10:50:29.388: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:29.388: INFO: kube-scheduler-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container kube-scheduler ready: true, restart count 0 Nov 22 10:50:29.388: INFO: l7-lb-controller-test-6bbac58e9d-master 
started at 2019-11-22 09:29:06 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 22 10:50:29.388: INFO: etcd-empty-dir-cleanup-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container etcd-empty-dir-cleanup ready: true, restart count 0 Nov 22 10:50:29.388: INFO: etcd-server-events-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container etcd-container ready: true, restart count 0 Nov 22 10:50:29.388: INFO: etcd-server-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container etcd-container ready: true, restart count 0 Nov 22 10:50:29.388: INFO: kube-addon-manager-test-6bbac58e9d-master started at 2019-11-22 09:29:05 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 22 10:50:29.388: INFO: metadata-proxy-v0.1-xr6wl started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.388: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 10:50:29.388: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:29.388: INFO: kube-apiserver-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container kube-apiserver ready: true, restart count 0 Nov 22 10:50:29.388: INFO: kube-controller-manager-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.388: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 22 10:50:29.534: INFO: Latency metrics for node test-6bbac58e9d-master Nov 22 10:50:29.534: INFO: Logging node info for node test-6bbac58e9d-minion-group-1pk2 Nov 22 10:50:29.574: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-1pk2 /api/v1/nodes/test-6bbac58e9d-minion-group-1pk2 a4f21abc-d48a-4c0f-a26f-9e634bca825a 20980 0 2019-11-22 10:48:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-1pk2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-1pk2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-disruptive-1972":"test-6bbac58e9d-minion-group-1pk2","csi-hostpath-disruptive-9353":"test-6bbac58e9d-minion-group-1pk2","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-1pk2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-1pk2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836020736 0} {<nil>} 7652364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573876736 0} {<nil>} 7396364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 10:48:23 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 10:49:42 +0000 UTC,LastTransitionTime:2019-11-22 10:21:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 10:49:42 +0000 UTC,LastTransitionTime:2019-11-22 10:21:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 10:49:42 +0000 UTC,LastTransitionTime:2019-11-22 10:21:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 10:49:42 +0000 UTC,LastTransitionTime:2019-11-22 10:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:104.198.3.26,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c363001bc173de2779c31270a0a03e8d,SystemUUID:C363001B-C173-DE27-79C3-1270A0A03E8D,BootID:ecfbc66f-0a8c-4787-a7bf-8e0ebe1e8bb2,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 
quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 10:50:29.575: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-1pk2 Nov 22 10:50:29.615: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-1pk2 Nov 22 10:50:29.666: INFO: kube-proxy-test-6bbac58e9d-minion-group-1pk2 started at 2019-11-22 10:30:10 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.666: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 10:50:29.666: INFO: csi-gce-pd-node-5mvgc started at 2019-11-22 10:49:28 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.666: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 10:50:29.666: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 10:50:29.666: INFO: csi-gce-pd-controller-0 started at 2019-11-22 10:49:28 +0000 UTC (0+4 container statuses recorded) Nov 22 10:50:29.666: INFO: Container csi-attacher ready: 
true, restart count 0 Nov 22 10:50:29.666: INFO: Container csi-provisioner ready: true, restart count 0 Nov 22 10:50:29.666: INFO: Container csi-resizer ready: true, restart count 0 Nov 22 10:50:29.666: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 10:50:29.666: INFO: fluentd-gcp-v3.2.0-4fdmw started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.666: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 10:50:29.666: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:29.666: INFO: npd-v0.8.0-224c2 started at 2019-11-22 10:48:20 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.666: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 10:50:29.666: INFO: metadata-proxy-v0.1-4bxj9 started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.666: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 10:50:29.666: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:29.813: INFO: Latency metrics for node test-6bbac58e9d-minion-group-1pk2 Nov 22 10:50:29.813: INFO: Logging node info for node test-6bbac58e9d-minion-group-dtt3 Nov 22 10:50:29.852: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-dtt3 /api/v1/nodes/test-6bbac58e9d-minion-group-dtt3 bbcaa4a7-21ed-4b1a-8d6c-097e686c368c 21045 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-dtt3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-dtt3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-dtt3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 10:50:02 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 10:48:18 +0000 UTC,LastTransitionTime:2019-11-22 09:39:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 10:48:18 +0000 UTC,LastTransitionTime:2019-11-22 09:39:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 10:48:18 +0000 UTC,LastTransitionTime:2019-11-22 09:39:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 10:48:18 +0000 UTC,LastTransitionTime:2019-11-22 09:40:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.160.250,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:015ba3833761f0b9cd8a2196bf6fb79d,SystemUUID:015BA383-3761-F0B9-CD8A-2196BF6FB79D,BootID:c9ec395e-18ec-40c2-b13c-49ae0567ad15,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:2114a2f70d34fa2821fb7f9bf373be5f44c8cbfeb6097fb5ba8eaf73cd38b72a k8s.gcr.io/addon-resizer:1.8.6],SizeBytes:37928220,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 10:50:29.852: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-dtt3 Nov 22 10:50:29.892: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-dtt3 Nov 22 10:50:29.944: INFO: metadata-proxy-v0.1-qj8lx started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.944: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 10:50:29.944: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:29.944: INFO: kubernetes-dashboard-7778f8b456-dwww9 started at 2019-11-22 09:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.944: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 22 10:50:29.944: INFO: kube-dns-autoscaler-65bc6d4889-kncqk started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.944: INFO: Container autoscaler ready: true, restart count 0 Nov 22 10:50:29.944: INFO: csi-gce-pd-node-rw62z started at 2019-11-22 10:49:28 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.944: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 10:50:29.944: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 10:50:29.944: INFO: kube-proxy-test-6bbac58e9d-minion-group-dtt3 started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.944: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 10:50:29.944: INFO: heapster-v1.6.0-beta.1-859599df9f-9nl5x started at 2019-11-22 09:29:47 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.944: INFO: Container heapster ready: true, restart count 0 Nov 22 10:50:29.944: INFO: Container heapster-nanny ready: true, restart count 0 Nov 22 10:50:29.944: INFO: fluentd-gcp-v3.2.0-z4gtt started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.944: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 10:50:29.944: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:29.944: INFO: npd-v0.8.0-86sjk started at 2019-11-22 09:29:41 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.944: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 10:50:29.944: INFO: coredns-65567c7b57-vqz56 started at 2019-11-22 09:29:55 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:29.944: INFO: Container coredns ready: true, restart count 0 Nov 22 10:50:29.944: INFO: metrics-server-v0.3.6-7d96444597-lfv7c started at 2019-11-22 09:29:45 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:29.944: INFO: Container metrics-server ready: true, restart count 0 Nov 22 10:50:29.944: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 22 10:50:30.091: INFO: Latency 
metrics for node test-6bbac58e9d-minion-group-dtt3 Nov 22 10:50:30.091: INFO: Logging node info for node test-6bbac58e9d-minion-group-ldgb Nov 22 10:50:30.130: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-ldgb /api/v1/nodes/test-6bbac58e9d-minion-group-ldgb 7af88a45-91da-49e2-aad1-693979aa273c 21051 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-ldgb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-ldgb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-ldgb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 10:50:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 10:48:50 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 10:48:50 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 10:48:50 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 10:48:50 +0000 UTC,LastTransitionTime:2019-11-22 10:40:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:104.199.127.196,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e1c327ba82c05d274d059f31a030f91,SystemUUID:7E1C327B-A82C-05D2-74D0-59F31A030F91,BootID:153cc788-4fe4-4a95-a234-e7f53446bb04,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 
k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 10:50:30.131: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-ldgb 
Nov 22 10:50:30.171: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-ldgb Nov 22 10:50:30.224: INFO: l7-default-backend-678889f899-sn2pt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:30.224: INFO: Container default-http-backend ready: true, restart count 0 Nov 22 10:50:30.224: INFO: coredns-65567c7b57-s9876 started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:30.224: INFO: Container coredns ready: true, restart count 0 Nov 22 10:50:30.224: INFO: csi-gce-pd-node-zc2px started at 2019-11-22 10:49:28 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:30.224: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 10:50:30.224: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 10:50:30.224: INFO: kube-proxy-test-6bbac58e9d-minion-group-ldgb started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:30.224: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 10:50:30.224: INFO: fluentd-gcp-v3.2.0-f9q96 started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:30.224: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 10:50:30.224: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:30.224: INFO: event-exporter-v0.3.1-747b47fcd-8chbt started at 2019-11-22 10:43:02 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:30.224: INFO: Container event-exporter ready: true, restart count 0 Nov 22 10:50:30.224: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:30.224: INFO: fluentd-gcp-scaler-76d9c77b4d-wh4nt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:30.224: INFO: Container fluentd-gcp-scaler ready: true, restart count 0 Nov 22 10:50:30.224: INFO: volume-snapshot-controller-0 started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:30.224: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 22 10:50:30.224: INFO: metadata-proxy-v0.1-ptzjq started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 10:50:30.224: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 10:50:30.224: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 10:50:30.224: INFO: npd-v0.8.0-wmkxq started at 2019-11-22 09:29:42 +0000 UTC (0+1 container statuses recorded) Nov 22 10:50:30.224: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 10:50:30.365: INFO: Latency metrics for node test-6bbac58e9d-minion-group-ldgb Nov 22 10:50:30.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "multivolume-4046" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\[Slow\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:191 Nov 22 11:22:35.724: Unexpected error: <*errors.errorString | 0xc003dabd20>: { s: "expected pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq\" to be \"success or failure\"", } expected pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq" success: Gave up after waiting 5m0s for pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq" to be "success or failure" occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:894from junit_01.xml
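The "Gave up after waiting 5m0s" error above comes from the e2e framework polling the test pod's phase until it reaches "success or failure". A minimal sketch of that polling pattern is shown below; it is not the framework's actual implementation, and `getPodPhase` is a hypothetical stand-in for a client-go `CoreV1().Pods(namespace).Get(...)` call.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// getPodPhase is a hypothetical helper standing in for a client-go
// CoreV1().Pods(namespace).Get(...) call.
func getPodPhase(namespace, name string) (v1.PodPhase, error) {
	return v1.PodPending, nil // placeholder
}

// waitForPodSuccess polls the pod phase every 2s until it is Succeeded,
// fails immediately on Failed, and otherwise gives up with a timeout error
// once the deadline passes (the "Gave up after waiting ..." message above).
func waitForPodSuccess(namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		phase, err := getPodPhase(namespace, name)
		if err != nil {
			return false, err
		}
		switch phase {
		case v1.PodSucceeded:
			return true, nil // "success"
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			return false, nil // still Pending/Running; keep polling
		}
	})
}

func main() {
	err := waitForPodSuccess("volume-6167",
		"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq", 5*time.Minute)
	fmt.Println(err)
}
```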
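The events collected further down in this log point at the underlying cause: provisioning retries past an initial "no topology key found on CSINode" error, but the pod then stays Pending because MountVolume.MountDevice fails with "executable file not found in $PATH" while formatting the volume as xfs. A CSI node plugin's stage step commonly formats and mounts through `SafeFormatAndMount` from `k8s.io/mount-utils`, which shells out to `mkfs.<fstype>`; the sketch below is only an illustration under that assumption (paths are hypothetical), not the gce-pd driver's actual code, but it reproduces the same failure mode when `mkfs.xfs` is missing from the driver container's `$PATH`.

```go
package main

import (
	"log"

	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// stageVolume formats (if needed) and mounts a block device, the way a CSI
// node plugin commonly implements NodeStageVolume. For fstype "xfs" the
// mounter invokes mkfs.xfs on the device; if that binary is not on $PATH
// inside the driver container, FormatAndMount returns
// "executable file not found in $PATH", as seen in the events below.
func stageVolume(devicePath, stagingPath, fstype string) error {
	mounter := &mount.SafeFormatAndMount{
		Interface: mount.New(""),
		Exec:      utilexec.New(),
	}
	return mounter.FormatAndMount(devicePath, stagingPath, fstype, nil)
}

func main() {
	// Hypothetical device and staging paths, shaped like the ones in this log.
	err := stageVolume(
		"/dev/disk/by-id/google-pvc-example",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount",
		"xfs",
	)
	if err != nil {
		log.Fatal(err)
	}
}
```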
[BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client Nov 22 11:17:26.093: INFO: >>> kubeConfig: /workspace/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename volume �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow exec of files on the volume /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:191 �[1mSTEP�[0m: deploying csi gce-pd driver Nov 22 11:17:26.286: INFO: Found CI service account key at /etc/service-account/service-account.json Nov 22 11:17:26.286: INFO: Running cp [/etc/service-account/service-account.json /tmp/60007c44-0d45-4076-8a04-2e98f608a0d2/cloud-sa.json] Nov 22 11:17:26.328: INFO: Shredding file /tmp/60007c44-0d45-4076-8a04-2e98f608a0d2/cloud-sa.json Nov 22 11:17:26.328: INFO: Running shred [--remove /tmp/60007c44-0d45-4076-8a04-2e98f608a0d2/cloud-sa.json] Nov 22 11:17:26.353: INFO: File /tmp/60007c44-0d45-4076-8a04-2e98f608a0d2/cloud-sa.json successfully shredded Nov 22 11:17:26.361: INFO: creating *v1.ServiceAccount: volume-6167/csi-attacher Nov 22 11:17:26.408: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-6167 Nov 22 11:17:26.408: INFO: Define cluster role external-attacher-runner-volume-6167 Nov 22 11:17:26.452: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-6167 Nov 22 11:17:26.491: INFO: creating *v1.Role: volume-6167/external-attacher-cfg-volume-6167 Nov 22 11:17:26.531: INFO: creating *v1.RoleBinding: volume-6167/csi-attacher-role-cfg Nov 22 11:17:26.571: INFO: creating *v1.ServiceAccount: volume-6167/csi-provisioner Nov 22 11:17:26.611: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-6167 Nov 22 11:17:26.611: INFO: Define cluster role external-provisioner-runner-volume-6167 Nov 22 11:17:26.651: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-6167 Nov 22 11:17:26.692: INFO: creating *v1.Role: volume-6167/external-provisioner-cfg-volume-6167 Nov 22 11:17:26.731: INFO: creating *v1.RoleBinding: volume-6167/csi-provisioner-role-cfg Nov 22 11:17:26.775: INFO: creating *v1.ServiceAccount: volume-6167/csi-gce-pd-controller-sa Nov 22 11:17:26.816: INFO: creating *v1.ClusterRole: csi-gce-pd-provisioner-role-volume-6167 Nov 22 11:17:26.816: INFO: Define cluster role csi-gce-pd-provisioner-role-volume-6167 Nov 22 11:17:26.858: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-controller-provisioner-binding-volume-6167 Nov 22 11:17:26.910: INFO: creating *v1.ClusterRole: csi-gce-pd-attacher-role-volume-6167 Nov 22 11:17:26.910: INFO: Define cluster role csi-gce-pd-attacher-role-volume-6167 Nov 22 11:17:26.953: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-controller-attacher-binding-volume-6167 Nov 22 11:17:26.994: INFO: creating *v1.ClusterRole: csi-gce-pd-resizer-role-volume-6167 Nov 22 11:17:26.994: INFO: Define cluster role csi-gce-pd-resizer-role-volume-6167 Nov 22 11:17:27.035: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-resizer-binding-volume-6167 Nov 22 11:17:27.075: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-volume-6167 Nov 22 11:17:27.117: INFO: creating *v1.DaemonSet: volume-6167/csi-gce-pd-node Nov 22 
11:17:27.159: INFO: creating *v1.StatefulSet: volume-6167/csi-gce-pd-controller Nov 22 11:17:27.406: INFO: Test running for native CSI Driver, not checking metrics Nov 22 11:17:27.406: INFO: Creating resource for dynamic PV Nov 22 11:17:27.407: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(pd.csi.storage.gke.io) supported size:{ 1Mi} �[1mSTEP�[0m: creating a StorageClass volume-6167-pd.csi.storage.gke.io-scv6cjm �[1mSTEP�[0m: creating a claim Nov 22 11:17:27.447: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil �[1mSTEP�[0m: Creating pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq �[1mSTEP�[0m: Creating a pod to test exec-volume-test Nov 22 11:17:27.568: INFO: Waiting up to 5m0s for pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq" in namespace "volume-6167" to be "success or failure" Nov 22 11:17:27.607: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.30461ms Nov 22 11:17:29.647: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07852296s Nov 22 11:17:31.686: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11780544s Nov 22 11:17:33.725: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156528897s Nov 22 11:17:35.765: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196411888s Nov 22 11:17:37.825: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256936214s Nov 22 11:17:39.866: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.297468849s Nov 22 11:17:41.907: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.338632842s Nov 22 11:17:43.946: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.377612278s Nov 22 11:17:45.986: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.417223115s Nov 22 11:17:48.070: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.501448313s Nov 22 11:17:50.108: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.539869517s Nov 22 11:17:52.147: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.57871805s Nov 22 11:17:54.186: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.618083131s Nov 22 11:17:56.225: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.657034258s Nov 22 11:17:58.264: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.6959742s Nov 22 11:18:00.305: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.736442309s Nov 22 11:18:02.344: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.77614883s Nov 22 11:18:04.384: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.815221074s Nov 22 11:18:06.422: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.854193034s Nov 22 11:18:08.462: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 40.893568873s Nov 22 11:18:10.501: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 42.932221767s Nov 22 11:18:12.540: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 44.971473829s Nov 22 11:18:14.579: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 47.010629086s Nov 22 11:18:16.617: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 49.048882669s Nov 22 11:18:18.656: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 51.087483256s Nov 22 11:18:20.695: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 53.126881905s Nov 22 11:18:22.735: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 55.167014366s Nov 22 11:18:24.773: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 57.204741142s Nov 22 11:18:26.812: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 59.243607472s Nov 22 11:18:28.851: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.282230734s Nov 22 11:18:30.888: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.320052667s Nov 22 11:18:32.928: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.35960233s Nov 22 11:18:34.968: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.399449952s Nov 22 11:18:37.007: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.439163987s Nov 22 11:18:39.046: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.477910147s Nov 22 11:18:41.086: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.517400429s Nov 22 11:18:43.124: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.55592882s Nov 22 11:18:45.163: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m17.595026433s Nov 22 11:18:47.203: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.634387392s Nov 22 11:18:49.242: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.673949599s Nov 22 11:18:51.282: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.713498242s Nov 22 11:18:53.320: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.751918492s Nov 22 11:18:55.359: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m27.790996441s Nov 22 11:18:57.398: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.829681418s Nov 22 11:18:59.437: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m31.868870851s Nov 22 11:19:01.477: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.908275739s Nov 22 11:19:03.517: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.948354637s Nov 22 11:19:05.556: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.987353579s Nov 22 11:19:07.596: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.027334694s Nov 22 11:19:09.635: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.066936146s Nov 22 11:19:11.675: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.106594015s Nov 22 11:19:13.714: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.146062295s Nov 22 11:19:15.754: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.185734452s Nov 22 11:19:17.793: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.224813524s Nov 22 11:19:19.832: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.263538524s Nov 22 11:19:21.871: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.302235097s Nov 22 11:19:23.909: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.340412897s Nov 22 11:19:25.947: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.378806648s Nov 22 11:19:27.985: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.41719447s Nov 22 11:19:30.027: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m2.458447094s Nov 22 11:19:32.068: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.4997345s Nov 22 11:19:34.108: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.539455449s Nov 22 11:19:36.147: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.578441516s Nov 22 11:19:38.193: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.624324201s Nov 22 11:19:40.232: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.66351981s Nov 22 11:19:42.272: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.704000475s Nov 22 11:19:44.311: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.743033723s Nov 22 11:19:46.351: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.78275535s Nov 22 11:19:48.390: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.822083803s Nov 22 11:19:50.430: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.861242989s Nov 22 11:19:52.468: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.899706122s Nov 22 11:19:54.507: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.93862881s Nov 22 11:19:56.547: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.978327899s Nov 22 11:19:58.586: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m31.017738013s Nov 22 11:20:00.625: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m33.056927733s Nov 22 11:20:02.665: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m35.096668993s Nov 22 11:20:04.704: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m37.135616688s Nov 22 11:20:06.743: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m39.175185281s Nov 22 11:20:08.783: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m41.214715615s Nov 22 11:20:10.821: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m43.252950985s Nov 22 11:20:12.861: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m45.292935018s Nov 22 11:20:14.900: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m47.331583621s Nov 22 11:20:16.939: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m49.370336686s Nov 22 11:20:18.979: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m51.410424082s Nov 22 11:20:21.018: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m53.449887004s Nov 22 11:20:23.058: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.489960108s Nov 22 11:20:25.097: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m57.528801426s Nov 22 11:20:27.136: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m59.567698913s Nov 22 11:20:29.175: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m1.606915838s Nov 22 11:20:31.215: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m3.646353458s Nov 22 11:20:33.253: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m5.685035975s Nov 22 11:20:35.292: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m7.723656034s Nov 22 11:20:37.332: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m9.763299006s Nov 22 11:20:39.371: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m11.802420806s Nov 22 11:20:41.409: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m13.841172696s Nov 22 11:20:43.449: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m15.880991607s Nov 22 11:20:45.488: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m17.919640695s Nov 22 11:20:47.527: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m19.958399724s Nov 22 11:20:49.566: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m21.998189928s Nov 22 11:20:51.606: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.037655962s Nov 22 11:20:53.646: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.077291695s Nov 22 11:20:55.687: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.118342888s Nov 22 11:20:57.726: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.15775609s Nov 22 11:20:59.766: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m32.198004848s Nov 22 11:21:01.806: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.237816533s Nov 22 11:21:03.846: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.277683766s Nov 22 11:21:05.886: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.317413727s Nov 22 11:21:07.925: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.356451646s Nov 22 11:21:09.971: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.402677211s Nov 22 11:21:12.010: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.44162055s Nov 22 11:21:14.050: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.481635847s Nov 22 11:21:16.089: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.521080424s Nov 22 11:21:18.128: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.559947113s Nov 22 11:21:20.190: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.621248165s Nov 22 11:21:22.233: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.66442751s Nov 22 11:21:24.273: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.705146683s Nov 22 11:21:26.313: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.744852086s Nov 22 11:21:28.352: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.784174667s Nov 22 11:21:30.392: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.823969913s Nov 22 11:21:32.432: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.86373803s Nov 22 11:21:34.471: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.902912239s Nov 22 11:21:36.511: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.942872588s Nov 22 11:21:38.550: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.981920683s Nov 22 11:21:40.590: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m13.022161868s Nov 22 11:21:42.629: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m15.061221081s Nov 22 11:21:44.668: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m17.099743252s Nov 22 11:21:46.710: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m19.14191623s Nov 22 11:21:48.750: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m21.181656093s Nov 22 11:21:50.790: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m23.222031507s Nov 22 11:21:52.830: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m25.261712122s Nov 22 11:21:54.868: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m27.299945253s Nov 22 11:21:56.907: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m29.33864978s Nov 22 11:21:58.946: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m31.378029573s Nov 22 11:22:00.986: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m33.418041167s Nov 22 11:22:03.025: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m35.456535491s Nov 22 11:22:05.065: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m37.496527642s Nov 22 11:22:07.104: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m39.53582081s Nov 22 11:22:09.149: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m41.5805754s Nov 22 11:22:11.189: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m43.620423287s Nov 22 11:22:13.228: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m45.660062138s Nov 22 11:22:15.269: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.700437831s Nov 22 11:22:17.308: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.739991919s Nov 22 11:22:19.348: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.779884538s Nov 22 11:22:21.387: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.818453004s Nov 22 11:22:23.425: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.857174097s Nov 22 11:22:25.465: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.896382623s Nov 22 11:22:27.505: INFO: Pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m59.936430854s Nov 22 11:22:29.597: INFO: Failed to get logs from node "test-6bbac58e9d-minion-group-dtt3" pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq" container "exec-container-pd-csi-storage-gke-io-dynamicpv-bdrq": the server rejected our request for an unknown reason (get pods exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq) �[1mSTEP�[0m: delete the pod Nov 22 11:22:29.638: INFO: Waiting for pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq to disappear Nov 22 11:22:29.681: INFO: Pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq still exists Nov 22 11:22:31.682: INFO: Waiting for pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq to disappear Nov 22 11:22:31.722: INFO: Pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq still exists Nov 22 11:22:33.682: INFO: Waiting for pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq to disappear Nov 22 11:22:33.723: INFO: Pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq still exists Nov 22 11:22:35.682: INFO: Waiting for pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq to disappear Nov 22 11:22:35.723: INFO: Pod exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq no longer exists Nov 22 11:22:35.724: FAIL: Unexpected error: <*errors.errorString | 0xc003dabd20>: { s: "expected pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq\" to be \"success or failure\"", } expected pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq" success: Gave up after waiting 5m0s for pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq" to be "success or failure" occurred �[1mSTEP�[0m: Deleting pvc Nov 22 11:22:35.725: INFO: Deleting PersistentVolumeClaim "pd.csi.storage.gke.ios8zs5" �[1mSTEP�[0m: Deleting sc �[1mSTEP�[0m: uninstalling gce-pd driver Nov 22 11:22:35.809: INFO: deleting *v1.ServiceAccount: volume-6167/csi-attacher Nov 22 11:22:35.850: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-6167 Nov 22 11:22:35.892: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-6167 Nov 22 11:22:35.934: INFO: deleting *v1.Role: volume-6167/external-attacher-cfg-volume-6167 Nov 22 11:22:35.981: INFO: deleting *v1.RoleBinding: volume-6167/csi-attacher-role-cfg Nov 22 11:22:36.035: INFO: deleting *v1.ServiceAccount: volume-6167/csi-provisioner Nov 22 11:22:36.076: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-6167 Nov 22 11:22:36.121: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-6167 Nov 22 11:22:36.162: INFO: deleting *v1.Role: volume-6167/external-provisioner-cfg-volume-6167 Nov 22 11:22:36.204: INFO: deleting *v1.RoleBinding: volume-6167/csi-provisioner-role-cfg Nov 22 11:22:36.248: INFO: deleting *v1.ServiceAccount: volume-6167/csi-gce-pd-controller-sa Nov 22 11:22:36.289: INFO: deleting *v1.ClusterRole: csi-gce-pd-provisioner-role-volume-6167 Nov 22 11:22:36.340: INFO: deleting *v1.ClusterRoleBinding: csi-gce-pd-controller-provisioner-binding-volume-6167 Nov 22 11:22:36.381: INFO: deleting *v1.ClusterRole: csi-gce-pd-attacher-role-volume-6167 Nov 22 11:22:36.426: INFO: deleting *v1.ClusterRoleBinding: csi-gce-pd-controller-attacher-binding-volume-6167 Nov 22 11:22:36.469: INFO: deleting *v1.ClusterRole: csi-gce-pd-resizer-role-volume-6167 Nov 22 11:22:36.511: INFO: deleting *v1.ClusterRoleBinding: csi-gce-pd-resizer-binding-volume-6167 Nov 22 11:22:36.555: INFO: deleting *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-volume-6167 Nov 22 11:22:36.597: INFO: deleting *v1.DaemonSet: volume-6167/csi-gce-pd-node Nov 22 11:22:36.640: INFO: deleting *v1.StatefulSet: volume-6167/csi-gce-pd-controller [AfterEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 �[1mSTEP�[0m: Collecting events from namespace "volume-6167". �[1mSTEP�[0m: Found 47 events. Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-controller: {statefulset-controller } SuccessfulCreate: create Pod csi-gce-pd-controller-0 in StatefulSet csi-gce-pd-controller successful Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-controller-0: {default-scheduler } Scheduled: Successfully assigned volume-6167/csi-gce-pd-controller-0 to test-6bbac58e9d-minion-group-1pk2 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-zjqsj Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-5zb8b Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-lpxh4 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-node-5zb8b: {default-scheduler } Scheduled: Successfully assigned volume-6167/csi-gce-pd-node-5zb8b to test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-node-lpxh4: {default-scheduler } Scheduled: Successfully assigned volume-6167/csi-gce-pd-node-lpxh4 to test-6bbac58e9d-minion-group-ldgb Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for csi-gce-pd-node-zjqsj: {default-scheduler } Scheduled: Successfully assigned volume-6167/csi-gce-pd-node-zjqsj to test-6bbac58e9d-minion-group-1pk2 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for pd.csi.storage.gke.ios8zs5: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:27 +0000 UTC - event for pd.csi.storage.gke.ios8zs5: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/csi-resizer:v0.3.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-attacher Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/csi-attacher:v2.0.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-provisioner Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet 
test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-provisioner Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-attacher Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-5zb8b: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-lpxh4: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-lpxh4: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-lpxh4: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-lpxh4: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container csi-driver-registrar Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-lpxh4: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-lpxh4: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container csi-driver-registrar Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-zjqsj: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-zjqsj: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-driver-registrar Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-zjqsj: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-zjqsj: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-zjqsj: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:28 +0000 UTC - event for csi-gce-pd-node-zjqsj: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-driver-registrar Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet 
test-6bbac58e9d-minion-group-1pk2} Created: Created container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container csi-resizer Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container csi-resizer Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-node-5zb8b: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-node-5zb8b: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container csi-driver-registrar Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-node-5zb8b: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container csi-driver-registrar Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-node-5zb8b: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:29 +0000 UTC - event for csi-gce-pd-node-5zb8b: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container gce-pd-driver Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:30 +0000 UTC - event for pd.csi.storage.gke.ios8zs5: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_963dc73f-e82a-4ae2-8dd9-389d0c9e9c34 } Provisioning: External provisioner is provisioning volume for claim "volume-6167/pd.csi.storage.gke.ios8zs5" Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:30 +0000 UTC - event for pd.csi.storage.gke.ios8zs5: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_963dc73f-e82a-4ae2-8dd9-389d0c9e9c34 } ProvisioningFailed: failed to provision volume with StorageClass "volume-6167-pd.csi.storage.gke.io-scv6cjm": error generating accessibility requirements: no topology key found on CSINode test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:35 +0000 UTC - event for pd.csi.storage.gke.ios8zs5: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_963dc73f-e82a-4ae2-8dd9-389d0c9e9c34 } ProvisioningSucceeded: Successfully provisioned volume pvc-3de78008-706a-42be-9cec-db3ae3d0ff49 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:36 +0000 UTC - event for exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq: {default-scheduler } Scheduled: Successfully assigned volume-6167/exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq to test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:48 +0000 UTC - event for exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-3de78008-706a-42be-9cec-db3ae3d0ff49" Nov 22 11:22:36.724: INFO: At 2019-11-22 11:17:52 +0000 UTC - event for exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq: {kubelet test-6bbac58e9d-minion-group-dtt3} FailedMount: MountVolume.MountDevice failed for volume "pvc-3de78008-706a-42be-9cec-db3ae3d0ff49" : rpc error: code = Internal desc = Failed to format and 
mount device from ("/dev/disk/by-id/google-pvc-3de78008-706a-42be-9cec-db3ae3d0ff49") to ("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3de78008-706a-42be-9cec-db3ae3d0ff49/globalmount") with fstype ("xfs") and options ([]): executable file not found in $PATH Nov 22 11:22:36.724: INFO: At 2019-11-22 11:19:39 +0000 UTC - event for exec-volume-test-pd-csi-storage-gke-io-dynamicpv-bdrq: {kubelet test-6bbac58e9d-minion-group-dtt3} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[vol1 default-token-jwscr]: timed out waiting for the condition Nov 22 11:22:36.764: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 11:22:36.764: INFO: csi-gce-pd-controller-0 test-6bbac58e9d-minion-group-1pk2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC }] Nov 22 11:22:36.764: INFO: csi-gce-pd-node-5zb8b test-6bbac58e9d-minion-group-dtt3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC }] Nov 22 11:22:36.765: INFO: csi-gce-pd-node-lpxh4 test-6bbac58e9d-minion-group-ldgb Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC }] Nov 22 11:22:36.765: INFO: csi-gce-pd-node-zjqsj test-6bbac58e9d-minion-group-1pk2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 11:17:27 +0000 UTC }] Nov 22 11:22:36.765: INFO: Nov 22 11:22:36.805: INFO: Logging node info for node test-6bbac58e9d-master Nov 22 11:22:36.847: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-master /api/v1/nodes/test-6bbac58e9d-master 8a7a430e-36f3-4dcf-b7dd-f2a903ca1fa5 31174 0 2019-11-22 09:29:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:20:46 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:20:46 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:20:46 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:20:46 +0000 UTC,LastTransitionTime:2019-11-22 09:29:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.175.21,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa8a320b898c1d5588780170530d5cf8,SystemUUID:fa8a320b-898c-1d55-8878-0170530d5cf8,BootID:2730095f-f6ec-4217-a9ae-32ba996e1eed,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:212137343,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:200623393,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:110377926,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:22:36.848: INFO: Logging kubelet events for node test-6bbac58e9d-master Nov 22 11:22:36.889: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-master Nov 22 11:22:36.939: INFO: etcd-server-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:22:36.939: INFO: kube-addon-manager-test-6bbac58e9d-master started at 2019-11-22 09:29:05 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 22 11:22:36.939: INFO: metadata-proxy-v0.1-xr6wl started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 
11:22:36.939: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:22:36.939: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:36.939: INFO: kube-apiserver-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container kube-apiserver ready: true, restart count 0 Nov 22 11:22:36.939: INFO: kube-controller-manager-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 22 11:22:36.939: INFO: etcd-empty-dir-cleanup-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container etcd-empty-dir-cleanup ready: true, restart count 1 Nov 22 11:22:36.939: INFO: etcd-server-events-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:22:36.939: INFO: kube-scheduler-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container kube-scheduler ready: true, restart count 0 Nov 22 11:22:36.939: INFO: l7-lb-controller-test-6bbac58e9d-master started at 2019-11-22 09:29:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:36.939: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 22 11:22:36.939: INFO: fluentd-gcp-v3.2.0-fxhtk started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:36.939: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:22:36.939: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.079: INFO: Latency metrics for node test-6bbac58e9d-master Nov 22 11:22:37.079: INFO: Logging node info for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:22:37.119: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-1pk2 /api/v1/nodes/test-6bbac58e9d-minion-group-1pk2 a4f21abc-d48a-4c0f-a26f-9e634bca825a 31501 0 2019-11-22 10:48:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-1pk2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-1pk2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-disruptive-1972":"test-6bbac58e9d-minion-group-1pk2","csi-hostpath-disruptive-9353":"test-6bbac58e9d-minion-group-1pk2","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-1pk2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-1pk2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836020736 0} {<nil>} 7652364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573876736 0} {<nil>} 7396364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:18:26 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:22:35 +0000 UTC,LastTransitionTime:2019-11-22 11:13:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:22:35 +0000 UTC,LastTransitionTime:2019-11-22 11:13:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:22:35 +0000 UTC,LastTransitionTime:2019-11-22 11:13:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:22:35 +0000 UTC,LastTransitionTime:2019-11-22 11:13:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:104.198.3.26,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c363001bc173de2779c31270a0a03e8d,SystemUUID:C363001B-C173-DE27-79C3-1270A0A03E8D,BootID:ecfbc66f-0a8c-4787-a7bf-8e0ebe1e8bb2,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b 
quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:22:37.120: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:22:37.161: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:22:37.216: INFO: kube-proxy-test-6bbac58e9d-minion-group-1pk2 started at 2019-11-22 11:13:35 +0000 UTC 
(0+1 container statuses recorded) Nov 22 11:22:37.216: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:22:37.216: INFO: metadata-proxy-v0.1-4bxj9 started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.216: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:22:37.216: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.216: INFO: npd-v0.8.0-224c2 started at 2019-11-22 10:48:20 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.216: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:22:37.216: INFO: fluentd-gcp-v3.2.0-4fdmw started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.216: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:22:37.216: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.216: INFO: csi-gce-pd-node-zjqsj started at 2019-11-22 11:17:27 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.216: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 11:22:37.216: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 11:22:37.216: INFO: csi-gce-pd-controller-0 started at 2019-11-22 11:17:27 +0000 UTC (0+4 container statuses recorded) Nov 22 11:22:37.216: INFO: Container csi-attacher ready: true, restart count 0 Nov 22 11:22:37.216: INFO: Container csi-provisioner ready: true, restart count 0 Nov 22 11:22:37.216: INFO: Container csi-resizer ready: true, restart count 0 Nov 22 11:22:37.216: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 11:22:37.413: INFO: Latency metrics for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:22:37.413: INFO: Logging node info for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:37.452: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-dtt3 /api/v1/nodes/test-6bbac58e9d-minion-group-dtt3 bbcaa4a7-21ed-4b1a-8d6c-097e686c368c 31046 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-dtt3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-dtt3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-dtt3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:20:04 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:17:45 +0000 UTC,LastTransitionTime:2019-11-22 09:39:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:17:45 +0000 UTC,LastTransitionTime:2019-11-22 09:39:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:17:45 +0000 UTC,LastTransitionTime:2019-11-22 09:39:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:17:45 +0000 UTC,LastTransitionTime:2019-11-22 11:05:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.160.250,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:015ba3833761f0b9cd8a2196bf6fb79d,SystemUUID:015BA383-3761-F0B9-CD8A-2196BF6FB79D,BootID:c9ec395e-18ec-40c2-b13c-49ae0567ad15,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:2114a2f70d34fa2821fb7f9bf373be5f44c8cbfeb6097fb5ba8eaf73cd38b72a k8s.gcr.io/addon-resizer:1.8.6],SizeBytes:37928220,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 
gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[kubernetes.io/csi/pd.csi.storage.gke.io^projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/disks/pvc-3de78008-706a-42be-9cec-db3ae3d0ff49],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/pd.csi.storage.gke.io^projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/disks/pvc-3de78008-706a-42be-9cec-db3ae3d0ff49,DevicePath:,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:22:37.453: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:37.495: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:37.540: INFO: heapster-v1.6.0-beta.1-859599df9f-9nl5x started at 2019-11-22 09:29:47 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.540: INFO: Container heapster ready: true, restart count 0 Nov 22 11:22:37.540: INFO: Container heapster-nanny ready: true, restart count 0 Nov 22 11:22:37.540: INFO: npd-v0.8.0-86sjk started at 2019-11-22 09:29:41 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.540: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:22:37.540: INFO: metadata-proxy-v0.1-qj8lx started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.540: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:22:37.540: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.540: INFO: fluentd-gcp-v3.2.0-z4gtt started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.540: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:22:37.540: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.540: INFO: coredns-65567c7b57-vqz56 started at 2019-11-22 09:29:55 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.540: INFO: Container coredns ready: true, restart count 0 Nov 22 11:22:37.540: INFO: kubernetes-dashboard-7778f8b456-dwww9 started at 2019-11-22 09:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.540: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 22 11:22:37.540: INFO: kube-dns-autoscaler-65bc6d4889-kncqk started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.540: INFO: Container autoscaler ready: true, restart count 0 Nov 22 11:22:37.540: INFO: kube-proxy-test-6bbac58e9d-minion-group-dtt3 started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.540: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:22:37.540: INFO: metrics-server-v0.3.6-7d96444597-lfv7c started at 2019-11-22 09:29:45 +0000 UTC (0+2 
container statuses recorded) Nov 22 11:22:37.540: INFO: Container metrics-server ready: true, restart count 0 Nov 22 11:22:37.540: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 22 11:22:37.540: INFO: csi-gce-pd-node-5zb8b started at 2019-11-22 11:17:27 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.540: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 11:22:37.540: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 11:22:37.678: INFO: Latency metrics for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:22:37.678: INFO: Logging node info for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:22:37.719: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-ldgb /api/v1/nodes/test-6bbac58e9d-minion-group-ldgb 7af88a45-91da-49e2-aad1-693979aa273c 31062 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-ldgb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-ldgb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-ldgb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:20:08 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:19:55 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:19:55 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:19:55 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:19:55 +0000 UTC,LastTransitionTime:2019-11-22 11:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:104.199.127.196,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e1c327ba82c05d274d059f31a030f91,SystemUUID:7E1C327B-A82C-05D2-74D0-59F31A030F91,BootID:153cc788-4fe4-4a95-a234-e7f53446bb04,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:22:37.719: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:22:37.761: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-ldgb Nov 22 11:22:37.821: INFO: fluentd-gcp-v3.2.0-f9q96 started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.821: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:22:37.821: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.821: INFO: metadata-proxy-v0.1-ptzjq started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.821: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:22:37.821: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.821: INFO: l7-default-backend-678889f899-sn2pt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.821: INFO: Container default-http-backend ready: true, restart count 0 Nov 22 11:22:37.821: INFO: volume-snapshot-controller-0 started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.821: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 22 11:22:37.821: INFO: event-exporter-v0.3.1-747b47fcd-8chbt started at 2019-11-22 10:43:02 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.821: INFO: Container event-exporter ready: true, restart count 0 Nov 22 11:22:37.821: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:22:37.821: INFO: fluentd-gcp-scaler-76d9c77b4d-wh4nt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.821: INFO: Container fluentd-gcp-scaler ready: true, restart count 0 Nov 22 11:22:37.821: INFO: coredns-65567c7b57-s9876 started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.821: INFO: Container coredns ready: true, restart count 0 Nov 22 11:22:37.821: INFO: kube-proxy-test-6bbac58e9d-minion-group-ldgb started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.821: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:22:37.821: INFO: npd-v0.8.0-wmkxq started at 2019-11-22 09:29:42 +0000 UTC (0+1 container statuses recorded) Nov 22 11:22:37.821: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:22:37.821: INFO: csi-gce-pd-node-lpxh4 started at 2019-11-22 11:17:27 +0000 UTC (0+2 container statuses recorded) Nov 22 11:22:37.821: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 11:22:37.821: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 11:22:37.961: INFO: Latency metrics for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:22:37.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "volume-6167" for this suite.
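The node dumps above record which CSI drivers have registered on each node in the csi.volume.kubernetes.io/nodeid annotation (for example, pd.csi.storage.gke.io appears on test-6bbac58e9d-minion-group-1pk2). The following is a minimal, hypothetical Go sketch of how that registration could be checked from a clientset; it is not part of the e2e framework, and it assumes a recent client-go where Get takes a context (the v1.17-era client used in this run took no context argument). The kubeconfig path, node name, and driver name are copied from this log.

```go
// Minimal sketch: check whether a CSI driver has registered on a node by
// reading the csi.volume.kubernetes.io/nodeid annotation seen in the node
// dumps above. Hypothetical helper, not part of the e2e framework.
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// driverRegisteredOnNode reports whether driverName appears in the node's
// csi.volume.kubernetes.io/nodeid annotation, which kubelet fills in once the
// driver's node plugin has registered.
func driverRegisteredOnNode(ctx context.Context, cs kubernetes.Interface, nodeName, driverName string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	raw, ok := node.Annotations["csi.volume.kubernetes.io/nodeid"]
	if !ok {
		return false, nil // no CSI drivers have registered on this node yet
	}
	// The annotation value is a JSON object of driver name -> node ID.
	ids := map[string]string{}
	if err := json.Unmarshal([]byte(raw), &ids); err != nil {
		return false, err
	}
	_, found := ids[driverName]
	return found, nil
}

func main() {
	// Kubeconfig path, node name, and driver name as they appear in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := driverRegisteredOnNode(context.Background(), cs,
		"test-6bbac58e9d-minion-group-1pk2", "pd.csi.storage.gke.io")
	fmt.Println(ok, err)
}
```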
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\[Slow\]\svolumes\sshould\sstore\sdata$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150 Nov 22 12:26:33.370: Failed to create injector pod: timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:597 from junit_01.xml
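The "timed out waiting for the condition" text is the message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, which a bounded poll returns when its condition never becomes true before the timeout. Below is a minimal, hypothetical sketch of that polling pattern applied to a pod reaching the Running phase; it is not the framework's actual implementation, the five-minute timeout is illustrative only, and it again assumes a recent client-go where Get takes a context. The kubeconfig path, namespace, and pod name are taken from the log that follows.

```go
// Minimal sketch of the bounded-poll pattern whose expiry yields the
// "timed out waiting for the condition" message (wait.ErrWaitTimeout).
// Hypothetical helper, not the framework's actual implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls every two seconds until the pod reports phase
// Running or the timeout expires; on expiry, PollImmediate returns
// wait.ErrWaitTimeout ("timed out waiting for the condition").
func waitForPodRunning(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient lookup errors and keep polling
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace and pod name as they appear later in this log; the timeout is
	// illustrative only.
	if err := waitForPodRunning(cs, "volume-6427", "gcepd-injector", 5*time.Minute); err != nil {
		fmt.Printf("Failed to create injector pod: %v\n", err)
	}
}
```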
[BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client Nov 22 12:16:31.491: INFO: >>> kubeConfig: /workspace/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename volume �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should store data /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150 �[1mSTEP�[0m: deploying csi gce-pd driver Nov 22 12:16:31.686: INFO: Found CI service account key at /etc/service-account/service-account.json Nov 22 12:16:31.687: INFO: Running cp [/etc/service-account/service-account.json /tmp/b98a0f0a-bed0-4704-8e4e-5e720deb01ec/cloud-sa.json] Nov 22 12:16:31.730: INFO: Shredding file /tmp/b98a0f0a-bed0-4704-8e4e-5e720deb01ec/cloud-sa.json Nov 22 12:16:31.730: INFO: Running shred [--remove /tmp/b98a0f0a-bed0-4704-8e4e-5e720deb01ec/cloud-sa.json] Nov 22 12:16:31.773: INFO: File /tmp/b98a0f0a-bed0-4704-8e4e-5e720deb01ec/cloud-sa.json successfully shredded Nov 22 12:16:31.780: INFO: creating *v1.ServiceAccount: volume-6427/csi-attacher Nov 22 12:16:31.821: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-6427 Nov 22 12:16:31.821: INFO: Define cluster role external-attacher-runner-volume-6427 Nov 22 12:16:31.864: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-6427 Nov 22 12:16:31.904: INFO: creating *v1.Role: volume-6427/external-attacher-cfg-volume-6427 Nov 22 12:16:31.944: INFO: creating *v1.RoleBinding: volume-6427/csi-attacher-role-cfg Nov 22 12:16:31.986: INFO: creating *v1.ServiceAccount: volume-6427/csi-provisioner Nov 22 12:16:32.027: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-6427 Nov 22 12:16:32.027: INFO: Define cluster role external-provisioner-runner-volume-6427 Nov 22 12:16:32.068: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-6427 Nov 22 12:16:32.109: INFO: creating *v1.Role: volume-6427/external-provisioner-cfg-volume-6427 Nov 22 12:16:32.156: INFO: creating *v1.RoleBinding: volume-6427/csi-provisioner-role-cfg Nov 22 12:16:32.197: INFO: creating *v1.ServiceAccount: volume-6427/csi-gce-pd-controller-sa Nov 22 12:16:32.239: INFO: creating *v1.ClusterRole: csi-gce-pd-provisioner-role-volume-6427 Nov 22 12:16:32.239: INFO: Define cluster role csi-gce-pd-provisioner-role-volume-6427 Nov 22 12:16:32.281: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-controller-provisioner-binding-volume-6427 Nov 22 12:16:32.322: INFO: creating *v1.ClusterRole: csi-gce-pd-attacher-role-volume-6427 Nov 22 12:16:32.322: INFO: Define cluster role csi-gce-pd-attacher-role-volume-6427 Nov 22 12:16:32.364: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-controller-attacher-binding-volume-6427 Nov 22 12:16:32.405: INFO: creating *v1.ClusterRole: csi-gce-pd-resizer-role-volume-6427 Nov 22 12:16:32.405: INFO: Define cluster role csi-gce-pd-resizer-role-volume-6427 Nov 22 12:16:32.448: INFO: creating *v1.ClusterRoleBinding: csi-gce-pd-resizer-binding-volume-6427 Nov 22 12:16:32.490: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-volume-6427 Nov 22 12:16:32.530: INFO: creating *v1.DaemonSet: volume-6427/csi-gce-pd-node Nov 22 12:16:32.591: INFO: 
creating *v1.StatefulSet: volume-6427/csi-gce-pd-controller Nov 22 12:16:32.868: INFO: Test running for native CSI Driver, not checking metrics Nov 22 12:16:32.868: INFO: Creating resource for dynamic PV Nov 22 12:16:32.868: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(pd.csi.storage.gke.io) supported size:{ 1Mi} �[1mSTEP�[0m: creating a StorageClass volume-6427-pd.csi.storage.gke.io-sc2qc75 �[1mSTEP�[0m: creating a claim Nov 22 12:16:32.908: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil �[1mSTEP�[0m: starting gcepd-injector Nov 22 12:21:33.154: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:33.197: INFO: Pod gcepd-injector still exists Nov 22 12:21:35.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:35.241: INFO: Pod gcepd-injector still exists Nov 22 12:21:37.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:37.263: INFO: Pod gcepd-injector still exists Nov 22 12:21:39.198: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:39.246: INFO: Pod gcepd-injector still exists Nov 22 12:21:41.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:41.241: INFO: Pod gcepd-injector still exists Nov 22 12:21:43.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:43.244: INFO: Pod gcepd-injector still exists Nov 22 12:21:45.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:45.242: INFO: Pod gcepd-injector still exists Nov 22 12:21:47.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:47.241: INFO: Pod gcepd-injector still exists Nov 22 12:21:49.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:49.242: INFO: Pod gcepd-injector still exists Nov 22 12:21:51.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:51.242: INFO: Pod gcepd-injector still exists Nov 22 12:21:53.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:53.242: INFO: Pod gcepd-injector still exists Nov 22 12:21:55.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:55.243: INFO: Pod gcepd-injector still exists Nov 22 12:21:57.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:57.242: INFO: Pod gcepd-injector still exists Nov 22 12:21:59.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:21:59.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:01.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:01.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:03.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:03.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:05.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:05.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:07.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:07.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:09.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:09.246: INFO: Pod gcepd-injector still exists Nov 22 12:22:11.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:11.243: INFO: Pod gcepd-injector still exists Nov 22 12:22:13.198: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:13.245: INFO: Pod gcepd-injector still exists Nov 22 12:22:15.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:15.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:17.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:17.244: INFO: 
Pod gcepd-injector still exists Nov 22 12:22:19.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:19.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:21.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:21.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:23.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:23.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:25.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:25.244: INFO: Pod gcepd-injector still exists Nov 22 12:22:27.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:27.245: INFO: Pod gcepd-injector still exists Nov 22 12:22:29.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:29.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:31.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:31.249: INFO: Pod gcepd-injector still exists Nov 22 12:22:33.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:33.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:35.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:35.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:37.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:37.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:39.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:39.244: INFO: Pod gcepd-injector still exists Nov 22 12:22:41.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:41.257: INFO: Pod gcepd-injector still exists Nov 22 12:22:43.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:43.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:45.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:45.243: INFO: Pod gcepd-injector still exists Nov 22 12:22:47.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:47.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:49.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:49.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:51.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:51.241: INFO: Pod gcepd-injector still exists Nov 22 12:22:53.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:53.246: INFO: Pod gcepd-injector still exists Nov 22 12:22:55.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:55.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:57.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:57.242: INFO: Pod gcepd-injector still exists Nov 22 12:22:59.198: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:22:59.242: INFO: Pod gcepd-injector still exists Nov 22 12:23:01.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:23:01.244: INFO: Pod gcepd-injector still exists Nov 22 12:23:03.198: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:23:03.242: INFO: Pod gcepd-injector still exists Nov 22 12:23:05.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:23:05.243: INFO: Pod gcepd-injector still exists Nov 22 12:23:07.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:23:07.242: INFO: Pod gcepd-injector still exists Nov 22 12:23:09.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:23:09.242: INFO: Pod gcepd-injector still exists Nov 22 12:23:11.197: INFO: Waiting for pod gcepd-injector to disappear Nov 22 12:23:11.244: INFO: Pod gcepd-injector still exists Nov 22 
12:23:13.197 to 12:26:33.368: INFO: Waiting for pod gcepd-injector to disappear (check repeated roughly every 2 seconds; every check logged "Pod gcepd-injector still exists") Nov 22 12:26:33.369: FAIL: Failed to create injector pod: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/volume.InjectContent(0xc00099d180, 0xc0026caaa0, 0xb, 0x4a49bc8, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:597 +0x944 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).defineTests.func3() /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:181 +0x3c9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00095a100) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a k8s.io/kubernetes/test/e2e.TestE2E(0xc00095a100) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b testing.tRunner(0xc00095a100, 0x4c2fc20) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 STEP: cleaning the environment after gcepd Nov 22 12:26:33.371: INFO: Deleting pod "gcepd-client" in namespace "volume-6427" STEP: Deleting pvc Nov 22 12:26:33.460: INFO: Deleting PersistentVolumeClaim "pd.csi.storage.gke.iobg4xn" STEP: Deleting sc STEP: uninstalling gce-pd driver Nov 22 12:26:33.664: INFO: deleting *v1.ServiceAccount: volume-6427/csi-attacher Nov 22 12:26:33.764: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-6427 Nov 22 12:26:33.864: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-6427 Nov 22 12:26:33.963: INFO: deleting *v1.Role: volume-6427/external-attacher-cfg-volume-6427 Nov 22 12:26:34.063: INFO: deleting *v1.RoleBinding: volume-6427/csi-attacher-role-cfg Nov 22 12:26:34.162: INFO: deleting *v1.ServiceAccount: volume-6427/csi-provisioner Nov 22 12:26:34.262: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-6427 Nov 22 12:26:34.364: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-6427 Nov 22 12:26:34.514: INFO: deleting *v1.Role: volume-6427/external-provisioner-cfg-volume-6427 Nov 22 12:26:34.566: INFO: deleting *v1.RoleBinding: volume-6427/csi-provisioner-role-cfg Nov 22 12:26:34.664: INFO: deleting *v1.ServiceAccount: volume-6427/csi-gce-pd-controller-sa Nov 22 12:26:34.769: INFO: deleting *v1.ClusterRole: csi-gce-pd-provisioner-role-volume-6427 Nov 22 12:26:34.862: INFO: deleting *v1.ClusterRoleBinding: csi-gce-pd-controller-provisioner-binding-volume-6427 Nov 22 12:26:34.962: INFO: deleting *v1.ClusterRole: csi-gce-pd-attacher-role-volume-6427 Nov 22 12:26:35.065: INFO: deleting *v1.ClusterRoleBinding: csi-gce-pd-controller-attacher-binding-volume-6427 Nov 22 12:26:35.165: INFO: deleting *v1.ClusterRole: csi-gce-pd-resizer-role-volume-6427 Nov 22 12:26:35.265: INFO: deleting *v1.ClusterRoleBinding: csi-gce-pd-resizer-binding-volume-6427 Nov 22 12:26:35.364: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-volume-6427 Nov 22 12:26:35.464: INFO: deleting *v1.DaemonSet: volume-6427/csi-gce-pd-node Nov 22 12:26:35.562: INFO: deleting *v1.StatefulSet: volume-6427/csi-gce-pd-controller [AfterEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "volume-6427". STEP: Found 47 events. 
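
Editor's note on the wait loop above: the framework polls the API server every two seconds for the gcepd-injector pod, and the events and pod dump collected below show why it never resolved: MountVolume.MountDevice first fails because pd.csi.storage.gke.io is not yet in the node's list of registered CSI drivers, and then fails to format the volume as xfs ("executable file not found in $PATH"), so the pod stays Pending until the wait times out. The following is a minimal, self-contained sketch of such a 2-second poll loop, not the e2e framework's actual code; the namespace and pod name come from this log, while the KUBECONFIG source and the 5-minute timeout are illustrative assumptions, and the Get signature matches the pre-1.18 client-go vintage used by this job.

// polldemo: sketch of a "wait for pod to disappear" loop (assumed, not the framework's code).
package main

import (
	"fmt"
	"os"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodToDisappear polls until the named pod is gone or the timeout elapses.
func waitForPodToDisappear(cs kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		fmt.Printf("Waiting for pod %s to disappear\n", name)
		_, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod deleted: condition met
		}
		if err != nil {
			return false, err // unexpected API error: abort the wait
		}
		fmt.Printf("Pod %s still exists\n", name)
		return false, nil // keep polling
	})
}

func main() {
	// Assumption: KUBECONFIG points at the test cluster.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodToDisappear(cs, "volume-6427", "gcepd-injector", 2*time.Second, 5*time.Minute); err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}

The 47 collected events and the node dumps follow.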
Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-controller: {statefulset-controller } SuccessfulCreate: create Pod csi-gce-pd-controller-0 in StatefulSet csi-gce-pd-controller successful Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-controller-0: {default-scheduler } Scheduled: Successfully assigned volume-6427/csi-gce-pd-controller-0 to test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-fhth4 Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-tqtbt Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-node: {daemonset-controller } SuccessfulCreate: Created pod: csi-gce-pd-node-dtdvr Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-node-dtdvr: {default-scheduler } Scheduled: Successfully assigned volume-6427/csi-gce-pd-node-dtdvr to test-6bbac58e9d-minion-group-dtt3 Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-node-fhth4: {default-scheduler } Scheduled: Successfully assigned volume-6427/csi-gce-pd-node-fhth4 to test-6bbac58e9d-minion-group-ldgb Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for csi-gce-pd-node-tqtbt: {default-scheduler } Scheduled: Successfully assigned volume-6427/csi-gce-pd-node-tqtbt to test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:32 +0000 UTC - event for pd.csi.storage.gke.iobg4xn: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:33 +0000 UTC - event for pd.csi.storage.gke.iobg4xn: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Pulled: Container image "gcr.io/gke-release/csi-attacher:v2.0.0-gke.0" already present on machine Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Started: Started container csi-provisioner Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Created: Created container csi-attacher Nov 22 12:26:35.765: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Created: Created container csi-provisioner Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Pulled: Container image "gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-dtdvr: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-dtdvr: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container csi-driver-registrar Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 
UTC - event for csi-gce-pd-node-dtdvr: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container csi-driver-registrar Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-dtdvr: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-dtdvr: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-dtdvr: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-fhth4: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-fhth4: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-fhth4: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container csi-driver-registrar Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-fhth4: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-fhth4: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-fhth4: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container csi-driver-registrar Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-tqtbt: {kubelet test-6bbac58e9d-minion-group-3r4k} Started: Started container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-tqtbt: {kubelet test-6bbac58e9d-minion-group-3r4k} Created: Created container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-tqtbt: {kubelet test-6bbac58e9d-minion-group-3r4k} Pulled: Container image "gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-tqtbt: {kubelet test-6bbac58e9d-minion-group-3r4k} Created: Created container csi-driver-registrar Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-tqtbt: {kubelet test-6bbac58e9d-minion-group-3r4k} Started: Started container csi-driver-registrar Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:34 +0000 UTC - event for csi-gce-pd-node-tqtbt: {kubelet test-6bbac58e9d-minion-group-3r4k} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Created: Created container csi-resizer Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 
UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Created: Created container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Pulled: Container image "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Started: Started container csi-attacher Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Started: Started container csi-resizer Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Pulled: Container image "gcr.io/gke-release/csi-resizer:v0.3.0-gke.0" already present on machine Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:35 +0000 UTC - event for csi-gce-pd-controller-0: {kubelet test-6bbac58e9d-minion-group-3r4k} Started: Started container gce-pd-driver Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:36 +0000 UTC - event for pd.csi.storage.gke.iobg4xn: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_d519f3ec-acdd-4e26-a06f-9f5ebba40821 } Provisioning: External provisioner is provisioning volume for claim "volume-6427/pd.csi.storage.gke.iobg4xn" Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:40 +0000 UTC - event for pd.csi.storage.gke.iobg4xn: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_d519f3ec-acdd-4e26-a06f-9f5ebba40821 } ProvisioningSucceeded: Successfully provisioned volume pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:41 +0000 UTC - event for gcepd-injector: {default-scheduler } Scheduled: Successfully assigned volume-6427/gcepd-injector to test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:53 +0000 UTC - event for gcepd-injector: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d" Nov 22 12:26:35.766: INFO: At 2019-11-22 12:16:56 +0000 UTC - event for gcepd-injector: {kubelet test-6bbac58e9d-minion-group-3r4k} FailedMount: MountVolume.MountDevice failed for volume "pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name pd.csi.storage.gke.io not found in the list of registered CSI drivers Nov 22 12:26:35.766: INFO: At 2019-11-22 12:17:28 +0000 UTC - event for gcepd-injector: {kubelet test-6bbac58e9d-minion-group-3r4k} FailedMount: MountVolume.MountDevice failed for volume "pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d" : rpc error: code = Internal desc = Failed to format and mount device from ("/dev/disk/by-id/google-pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d") to ("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d/globalmount") with fstype ("xfs") and options ([]): executable file not found in $PATH Nov 22 12:26:35.766: INFO: At 2019-11-22 12:18:44 +0000 UTC - event for gcepd-injector: {kubelet test-6bbac58e9d-minion-group-3r4k} FailedMount: Unable to attach or mount volumes: unmounted volumes=[gcepd-volume-0], unattached volumes=[gcepd-volume-0 default-token-wpplp]: timed out waiting for the condition Nov 22 12:26:35.814: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 12:26:35.814: INFO: csi-gce-pd-controller-0 
test-6bbac58e9d-minion-group-3r4k Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC }] Nov 22 12:26:35.814: INFO: csi-gce-pd-node-dtdvr test-6bbac58e9d-minion-group-dtt3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC }] Nov 22 12:26:35.814: INFO: csi-gce-pd-node-fhth4 test-6bbac58e9d-minion-group-ldgb Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC }] Nov 22 12:26:35.814: INFO: csi-gce-pd-node-tqtbt test-6bbac58e9d-minion-group-3r4k Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:32 +0000 UTC }] Nov 22 12:26:35.814: INFO: gcepd-injector test-6bbac58e9d-minion-group-3r4k Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:41 +0000 UTC ContainersNotReady containers with unready status: [gcepd-injector]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:41 +0000 UTC ContainersNotReady containers with unready status: [gcepd-injector]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-22 12:16:41 +0000 UTC }] Nov 22 12:26:35.814: INFO: Nov 22 12:26:35.866: INFO: Logging node info for node test-6bbac58e9d-master Nov 22 12:26:35.911: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-master /api/v1/nodes/test-6bbac58e9d-master 8a7a430e-36f3-4dcf-b7dd-f2a903ca1fa5 49774 0 2019-11-22 09:29:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 12:26:07 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 12:26:07 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 12:26:07 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 12:26:07 +0000 UTC,LastTransitionTime:2019-11-22 09:29:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.175.21,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa8a320b898c1d5588780170530d5cf8,SystemUUID:fa8a320b-898c-1d55-8878-0170530d5cf8,BootID:2730095f-f6ec-4217-a9ae-32ba996e1eed,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:212137343,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:200623393,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:110377926,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 
k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 12:26:35.911: INFO: Logging kubelet events for node test-6bbac58e9d-master Nov 22 12:26:35.961: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-master Nov 22 12:26:36.031: INFO: fluentd-gcp-v3.2.0-fxhtk started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.031: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 12:26:36.031: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:36.031: INFO: kube-scheduler-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container kube-scheduler ready: true, restart count 0 Nov 22 12:26:36.031: INFO: l7-lb-controller-test-6bbac58e9d-master started at 2019-11-22 09:29:06 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 22 12:26:36.031: INFO: etcd-empty-dir-cleanup-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container etcd-empty-dir-cleanup ready: true, restart count 1 Nov 22 12:26:36.031: INFO: etcd-server-events-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container etcd-container ready: true, restart count 1 Nov 22 12:26:36.031: INFO: etcd-server-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container etcd-container ready: true, restart count 1 Nov 22 12:26:36.031: INFO: kube-addon-manager-test-6bbac58e9d-master started at 2019-11-22 09:29:05 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 22 12:26:36.031: INFO: metadata-proxy-v0.1-xr6wl started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.031: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 12:26:36.031: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:36.031: INFO: kube-apiserver-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container kube-apiserver ready: true, restart count 1 Nov 22 12:26:36.031: INFO: kube-controller-manager-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.031: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 22 12:26:36.182: INFO: Latency metrics for node 
test-6bbac58e9d-master Nov 22 12:26:36.182: INFO: Logging node info for node test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:36.238: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-3r4k /api/v1/nodes/test-6bbac58e9d-minion-group-3r4k 395f2243-fea8-4878-a059-0529f3825e0b 49709 0 2019-11-22 12:00:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-3r4k kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-3r4k topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-disruptive-2266":"test-6bbac58e9d-minion-group-3r4k","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-3r4k"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-3r4k,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 
UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 12:25:45 +0000 UTC,LastTransitionTime:2019-11-22 12:00:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 12:00:30 +0000 UTC,LastTransitionTime:2019-11-22 12:00:30 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 12:21:41 +0000 UTC,LastTransitionTime:2019-11-22 12:13:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 12:21:41 +0000 UTC,LastTransitionTime:2019-11-22 12:13:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 12:21:41 +0000 UTC,LastTransitionTime:2019-11-22 12:13:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 12:21:41 +0000 UTC,LastTransitionTime:2019-11-22 12:13:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.7,},NodeAddress{Type:ExternalIP,Address:35.233.171.245,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-3r4k.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-3r4k.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e4b178dc1d0235381676312c1fab2fde,SystemUUID:E4B178DC-1D02-3538-1676-312C1FAB2FDE,BootID:462bf9e2-fbf8-40ff-ba6c-5bd09c1e7c81,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 
gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[kubernetes.io/csi/pd.csi.storage.gke.io^projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/disks/pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/pd.csi.storage.gke.io^projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/disks/pvc-f406d9bf-9c9d-4bfa-98f4-44a25976f26d,DevicePath:,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 12:26:36.239: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:36.279: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:36.347: INFO: gcepd-injector started at 2019-11-22 12:16:41 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.347: INFO: Container gcepd-injector ready: false, restart count 0 Nov 22 
12:26:36.347: INFO: fluentd-gcp-v3.2.0-fskv4 started at 2019-11-22 12:00:22 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.347: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 12:26:36.347: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:36.347: INFO: metadata-proxy-v0.1-4sh9w started at 2019-11-22 12:00:22 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.347: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 12:26:36.347: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:36.347: INFO: npd-v0.8.0-ddswt started at 2019-11-22 12:00:32 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.347: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 12:26:36.347: INFO: csi-gce-pd-node-tqtbt started at 2019-11-22 12:16:32 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.347: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 12:26:36.347: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 12:26:36.347: INFO: csi-gce-pd-controller-0 started at 2019-11-22 12:16:32 +0000 UTC (0+4 container statuses recorded) Nov 22 12:26:36.347: INFO: Container csi-attacher ready: true, restart count 0 Nov 22 12:26:36.347: INFO: Container csi-provisioner ready: true, restart count 0 Nov 22 12:26:36.347: INFO: Container csi-resizer ready: true, restart count 0 Nov 22 12:26:36.347: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 12:26:36.347: INFO: kube-proxy-test-6bbac58e9d-minion-group-3r4k started at 2019-11-22 12:09:36 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.347: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 12:26:36.545: INFO: Latency metrics for node test-6bbac58e9d-minion-group-3r4k Nov 22 12:26:36.546: INFO: Logging node info for node test-6bbac58e9d-minion-group-dtt3 Nov 22 12:26:36.586: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-dtt3 /api/v1/nodes/test-6bbac58e9d-minion-group-dtt3 bbcaa4a7-21ed-4b1a-8d6c-097e686c368c 49604 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-dtt3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-dtt3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3611":"test-6bbac58e9d-minion-group-dtt3","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-dtt3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-dtt3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 12:25:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 12:21:57 +0000 UTC,LastTransitionTime:2019-11-22 12:11:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 12:21:57 +0000 UTC,LastTransitionTime:2019-11-22 12:11:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 12:21:57 +0000 UTC,LastTransitionTime:2019-11-22 12:11:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 12:21:57 +0000 UTC,LastTransitionTime:2019-11-22 12:11:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.160.250,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:015ba3833761f0b9cd8a2196bf6fb79d,SystemUUID:015BA383-3761-F0B9-CD8A-2196BF6FB79D,BootID:c9ec395e-18ec-40c2-b13c-49ae0567ad15,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 
quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:2114a2f70d34fa2821fb7f9bf373be5f44c8cbfeb6097fb5ba8eaf73cd38b72a k8s.gcr.io/addon-resizer:1.8.6],SizeBytes:37928220,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 12:26:36.587: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-dtt3 Nov 22 12:26:36.627: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-dtt3 Nov 22 12:26:36.692: INFO: metrics-server-v0.3.6-7d96444597-lfv7c started at 2019-11-22 09:29:45 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.692: INFO: Container metrics-server ready: true, restart count 0 Nov 22 12:26:36.692: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 22 12:26:36.692: INFO: npd-v0.8.0-86sjk started at 2019-11-22 09:29:41 +0000 UTC (0+1 
container statuses recorded) Nov 22 12:26:36.692: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 12:26:36.692: INFO: coredns-65567c7b57-vqz56 started at 2019-11-22 09:29:55 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.692: INFO: Container coredns ready: true, restart count 0 Nov 22 12:26:36.692: INFO: kube-dns-autoscaler-65bc6d4889-kncqk started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.692: INFO: Container autoscaler ready: true, restart count 0 Nov 22 12:26:36.692: INFO: csi-gce-pd-node-dtdvr started at 2019-11-22 12:16:32 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.693: INFO: Container csi-driver-registrar ready: true, restart count 0 Nov 22 12:26:36.693: INFO: Container gce-pd-driver ready: true, restart count 0 Nov 22 12:26:36.693: INFO: kube-proxy-test-6bbac58e9d-minion-group-dtt3 started at 2019-11-22 11:33:24 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.693: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 12:26:36.693: INFO: heapster-v1.6.0-beta.1-859599df9f-9nl5x started at 2019-11-22 09:29:47 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.693: INFO: Container heapster ready: true, restart count 0 Nov 22 12:26:36.693: INFO: Container heapster-nanny ready: true, restart count 0 Nov 22 12:26:36.693: INFO: fluentd-gcp-v3.2.0-z4gtt started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.693: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 12:26:36.693: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:36.693: INFO: metadata-proxy-v0.1-qj8lx started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:36.693: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 12:26:36.693: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:36.693: INFO: kubernetes-dashboard-7778f8b456-dwww9 started at 2019-11-22 09:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:36.693: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 22 12:26:36.855: INFO: Latency metrics for node test-6bbac58e9d-minion-group-dtt3 Nov 22 12:26:36.855: INFO: Logging node info for node test-6bbac58e9d-minion-group-ldgb Nov 22 12:26:36.897: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-ldgb /api/v1/nodes/test-6bbac58e9d-minion-group-ldgb 7af88a45-91da-49e2-aad1-693979aa273c 49627 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-ldgb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-ldgb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-ldgb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 12:25:18 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 12:23:02 +0000 UTC,LastTransitionTime:2019-11-22 12:07:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 12:23:02 +0000 UTC,LastTransitionTime:2019-11-22 12:07:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 12:23:02 +0000 UTC,LastTransitionTime:2019-11-22 12:07:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 12:23:02 +0000 
UTC,LastTransitionTime:2019-11-22 12:07:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:104.199.127.196,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e1c327ba82c05d274d059f31a030f91,SystemUUID:7E1C327B-A82C-05D2-74D0-59F31A030F91,BootID:153cc788-4fe4-4a95-a234-e7f53446bb04,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 
gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 12:26:36.897: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-ldgb Nov 22 12:26:36.940: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-ldgb Nov 22 12:26:37.008: INFO: npd-v0.8.0-wmkxq started at 2019-11-22 09:29:42 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:37.008: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 12:26:37.008: INFO: volume-snapshot-controller-0 started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:37.008: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 22 12:26:37.008: INFO: coredns-65567c7b57-s9876 started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:37.008: INFO: Container coredns ready: true, restart count 0 Nov 22 12:26:37.008: INFO: l7-default-backend-678889f899-sn2pt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:37.008: INFO: Container default-http-backend ready: true, 
restart count 0 Nov 22 12:26:37.009: INFO: metadata-proxy-v0.1-ptzjq started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:37.009: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 12:26:37.009: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:37.009: INFO: fluentd-gcp-v3.2.0-f9q96 started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:37.009: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 12:26:37.009: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:37.009: INFO: kube-proxy-test-6bbac58e9d-minion-group-ldgb started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:37.009: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 12:26:37.009: INFO: csi-gce-pd-node-fhth4 started at 2019-11-22 12:16:32 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:37.009: INFO: Container csi-driver-registrar ready: false, restart count 0 Nov 22 12:26:37.009: INFO: Container gce-pd-driver ready: false, restart count 0 Nov 22 12:26:37.009: INFO: fluentd-gcp-scaler-76d9c77b4d-wh4nt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 12:26:37.009: INFO: Container fluentd-gcp-scaler ready: true, restart count 0 Nov 22 12:26:37.009: INFO: event-exporter-v0.3.1-747b47fcd-8chbt started at 2019-11-22 10:43:02 +0000 UTC (0+2 container statuses recorded) Nov 22 12:26:37.009: INFO: Container event-exporter ready: true, restart count 0 Nov 22 12:26:37.009: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 12:26:37.164: INFO: Latency metrics for node test-6bbac58e9d-minion-group-ldgb Nov 22 12:26:37.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-6427" for this suite.
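In the pod listing above, the csi-gce-pd-node-fhth4 pod on test-6bbac58e9d-minion-group-ldgb still reports both csi-driver-registrar and gce-pd-driver as ready: false. A minimal sketch of checking that node out-of-band for driver registration, by polling its CSINode object with client-go, follows. The kubeconfig path, node name, and driver name are taken from the log; the helper itself, the poll interval, and the timeout are assumptions and are not part of the e2e suite.

package main

// csinodecheck.go: hypothetical standalone helper (not part of the e2e framework)
// that polls a node's CSINode object until the pd.csi.storage.gke.io driver shows
// up, i.e. until node registration has completed. Assumes a recent client-go.

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    nodeName := "test-6bbac58e9d-minion-group-ldgb" // node whose registrar containers were not ready
    driverName := "pd.csi.storage.gke.io"

    // Poll window is an assumption; the suite's own timeout may differ.
    err = wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        if err != nil {
            return false, nil // treat lookup errors as "not registered yet" and keep polling
        }
        for _, d := range csiNode.Spec.Drivers {
            if d.Name == driverName {
                return true, nil
            }
        }
        return false, nil
    })
    if err != nil {
        fmt.Printf("driver %s never registered on %s: %v\n", driverName, nodeName, err)
        return
    }
    fmt.Printf("driver %s registered on %s\n", driverName, nodeName)
}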
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sgracefully\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:340 Nov 22 11:42:33.291: Encountered SSH error. Unexpected error: <*errors.errorString | 0xc003bf0890>: { s: "error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: EOF'", } error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: EOF' occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:323 from junit_01.xml
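The quoted error means the test could not open an SSH session to prow@104.198.3.26:22: the connection was accepted but dropped during the key exchange, which golang.org/x/crypto/ssh reports as "ssh: handshake failed: EOF". A minimal sketch of that kind of dial-with-retry against the same endpoint is shown below; the user, address, and error wording come from the log, while the key path, retry count, and back-off are assumptions, and this is not the e2e framework's actual implementation.

package main

// sshretry.go: hypothetical sketch of dialing prow@104.198.3.26:22 with retries.
// "ssh: handshake failed: EOF" is what golang.org/x/crypto/ssh returns when the
// remote side closes the connection during the key exchange.

import (
    "fmt"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

// dialWithRetry retries ssh.Dial a few times before giving up, wrapping the last
// error in the same "error getting SSH client to ..." form seen in the log.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
    var lastErr error
    for i := 0; i < attempts; i++ {
        client, err := ssh.Dial("tcp", addr, cfg)
        if err == nil {
            return client, nil
        }
        lastErr = err
        time.Sleep(5 * time.Second) // back-off between attempts is an assumption
    }
    return nil, fmt.Errorf("error getting SSH client to %s: %w", addr, lastErr)
}

func main() {
    // Key path is an assumption; any key accepted by the node's prow user works.
    key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/google_compute_engine")
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "prow",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        Timeout:         20 * time.Second,
    }
    client, err := dialWithRetry("104.198.3.26:22", cfg, 3)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer client.Close()
    fmt.Println("SSH session established")
}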
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 22 11:40:41.756: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename provisioning STEP: Waiting for a default service account to be provisioned in namespace [It] should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:340 Nov 22 11:40:41.990: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume Nov 22 11:40:42.116: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server STEP: Creating block device on node "test-6bbac58e9d-minion-group-1pk2" using path "/tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4" Nov 22 11:40:44.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4 && dd if=/dev/zero of=/tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4/file bs=4096 count=5120 && losetup -f /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4/file] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:40:44.739: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:40:45.065: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:40:45.065: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:40:45.385: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4 && chmod o+rwx /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:40:45.385: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:40:45.999: INFO: Creating resource for pre-provisioned PV Nov 22 11:40:46.000: INFO: Creating PVC and PV STEP: Creating a PVC followed by a PV Nov 22 11:40:46.091: INFO: Waiting for PV local-thhgk to bind to PVC pvc-8glvb Nov 22 11:40:46.091: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8glvb] to have phase Bound Nov 22 11:40:46.130: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. Nov 22 11:40:48.168: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. Nov 22 11:40:50.208: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. 
Nov 22 11:40:52.250: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. Nov 22 11:40:54.293: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. Nov 22 11:40:56.335: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. Nov 22 11:40:58.373: INFO: PersistentVolumeClaim pvc-8glvb found but phase is Pending instead of Bound. Nov 22 11:41:00.412: INFO: PersistentVolumeClaim pvc-8glvb found and phase=Bound (14.321450449s) Nov 22 11:41:00.413: INFO: Waiting up to 3m0s for PersistentVolume local-thhgk to have phase Bound Nov 22 11:41:00.452: INFO: PersistentVolume local-thhgk found and phase=Bound (38.384603ms) Nov 22 11:41:02.717: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/plugins -type d -exec mountpoint {} \; | grep 'is a mountpoint$' || true] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-zr86q ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:41:02.718: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:41:05.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/plugins -type d -exec mountpoint {} \; | grep 'is a mountpoint$' || true] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-dtt3-vjhcl ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:41:05.184: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:41:09.686: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/plugins -type d -exec mountpoint {} \; | grep 'is a mountpoint$' || true] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-ldgb-rvmrf ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:41:09.687: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Creating pod pod-subpath-test-local-preprovisionedpv-rslq STEP: Expecting the volume mount to be found. 
Nov 22 11:41:12.903: INFO: ssh prow@104.198.3.26:22: command: mount | grep d64f79cc-a1f6-478b-8754-029518cb30f3 | grep -v volume-subpaths Nov 22 11:41:12.903: INFO: ssh prow@104.198.3.26:22: stdout: "/dev/loop0 on /var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~local-volume/local-thhgk type ext4 (rw,relatime,data=ordered)\n/dev/loop0 on /var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~local-volume/local-thhgk type ext4 (rw,relatime,data=ordered)\n/dev/loop0 on /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~local-volume/local-thhgk type ext4 (rw,relatime,data=ordered)\n/dev/loop0 on /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~local-volume/local-thhgk type ext4 (rw,relatime,data=ordered)\ntmpfs on /var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~secret/default-token-tq674 type tmpfs (rw,relatime)\ntmpfs on /var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~secret/default-token-tq674 type tmpfs (rw,relatime)\ntmpfs on /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~secret/default-token-tq674 type tmpfs (rw,relatime)\ntmpfs on /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volumes/kubernetes.io~secret/default-token-tq674 type tmpfs (rw,relatime)\n" Nov 22 11:41:12.905: INFO: ssh prow@104.198.3.26:22: stderr: "" Nov 22 11:41:12.905: INFO: ssh prow@104.198.3.26:22: exit code: 0 STEP: Expecting the volume subpath mount to be found. Nov 22 11:41:13.471: INFO: ssh prow@104.198.3.26:22: command: cat /proc/self/mountinfo | grep d64f79cc-a1f6-478b-8754-029518cb30f3 | grep volume-subpaths Nov 22 11:41:13.471: INFO: ssh prow@104.198.3.26:22: stdout: "1356 329 7:0 /provisioning-7530 /var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volume-subpaths/local-thhgk/test-container-subpath-local-preprovisionedpv-rslq/0 rw,relatime shared:281 - ext4 /dev/loop0 rw,data=ordered\n1498 27 7:0 /provisioning-7530 /var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volume-subpaths/local-thhgk/test-container-subpath-local-preprovisionedpv-rslq/0 rw,relatime shared:281 - ext4 /dev/loop0 rw,data=ordered\n1497 370 7:0 /provisioning-7530 /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volume-subpaths/local-thhgk/test-container-subpath-local-preprovisionedpv-rslq/0 rw,relatime shared:281 - ext4 /dev/loop0 rw,data=ordered\n1496 369 7:0 /provisioning-7530 /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/d64f79cc-a1f6-478b-8754-029518cb30f3/volume-subpaths/local-thhgk/test-container-subpath-local-preprovisionedpv-rslq/0 rw,relatime shared:281 - ext4 /dev/loop0 rw,data=ordered\n" Nov 22 11:41:13.473: INFO: ssh prow@104.198.3.26:22: stderr: "" Nov 22 11:41:13.473: INFO: ssh prow@104.198.3.26:22: exit code: 0 STEP: Stopping the kubelet. 
Nov 22 11:41:13.514: INFO: Checking if systemctl command is present Nov 22 11:41:14.084: INFO: Checking if sudo command is present Nov 22 11:41:14.650: INFO: Attempting `sudo systemctl stop kubelet` Nov 22 11:41:15.297: INFO: ssh prow@104.198.3.26:22: command: sudo systemctl stop kubelet Nov 22 11:41:15.297: INFO: ssh prow@104.198.3.26:22: stdout: "" Nov 22 11:41:15.297: INFO: ssh prow@104.198.3.26:22: stderr: "" Nov 22 11:41:15.297: INFO: ssh prow@104.198.3.26:22: exit code: 0 Nov 22 11:41:15.297: INFO: Waiting up to 1m0s for node test-6bbac58e9d-minion-group-1pk2 condition Ready to be false Nov 22 11:41:15.338: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:17.383: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:19.423: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:21.463: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:23.505: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:25.545: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:27.586: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:29.626: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:31.668: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:33.713: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:35.757: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:37.800: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:39.842: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:41.904: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:43.945: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Nov 22 11:41:45.987: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:48.034: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 22 11:41:50.077: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled STEP: Deleting Pod "pod-subpath-test-local-preprovisionedpv-rslq" STEP: Starting the kubelet and waiting for pod to delete. Nov 22 11:41:52.209: INFO: Checking if systemctl command is present Nov 22 11:41:52.794: INFO: Checking if sudo command is present Nov 22 11:41:53.384: INFO: Attempting `sudo systemctl start kubelet` Nov 22 11:41:53.990: INFO: ssh prow@104.198.3.26:22: command: sudo systemctl start kubelet Nov 22 11:41:53.990: INFO: ssh prow@104.198.3.26:22: stdout: "" Nov 22 11:41:53.990: INFO: ssh prow@104.198.3.26:22: stderr: "" Nov 22 11:41:53.990: INFO: ssh prow@104.198.3.26:22: exit code: 0 Nov 22 11:41:53.990: INFO: Waiting up to 1m0s for node test-6bbac58e9d-minion-group-1pk2 condition Ready to be true Nov 22 11:41:54.033: INFO: Condition Ready of node test-6bbac58e9d-minion-group-1pk2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. STEP: Expecting the volume mount not to be found. Nov 22 11:42:33.291: INFO: ssh prow@104.198.3.26:22: command: mount | grep d64f79cc-a1f6-478b-8754-029518cb30f3 | grep -v volume-subpaths Nov 22 11:42:33.291: INFO: ssh prow@104.198.3.26:22: stdout: "" Nov 22 11:42:33.291: INFO: ssh prow@104.198.3.26:22: stderr: "" Nov 22 11:42:33.291: INFO: ssh prow@104.198.3.26:22: exit code: 0 Nov 22 11:42:33.291: FAIL: Encountered SSH error. Unexpected error: <*errors.errorString | 0xc003bf0890>: { s: "error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: EOF'", } error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: EOF' occurred Nov 22 11:42:33.332: INFO: Checking if systemctl command is present Nov 22 11:42:33.892: INFO: Checking if sudo command is present Nov 22 11:42:39.564: INFO: Attempting `sudo systemctl start kubelet` Nov 22 11:42:44.694: FAIL: SSH to Node "test-6bbac58e9d-minion-group-1pk2" errored. 
Unexpected error: <*errors.errorString | 0xc002f33530>: { s: "error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: EOF'", } error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: EOF' occurred STEP: Deleting pod Nov 22 11:42:44.696: INFO: Deleting pod "pod-subpath-test-local-preprovisionedpv-rslq" in namespace "provisioning-7530" STEP: Deleting pv and pvc Nov 22 11:42:44.736: INFO: Deleting PersistentVolumeClaim "pvc-8glvb" Nov 22 11:42:44.781: INFO: Deleting PersistentVolume "local-thhgk" Nov 22 11:42:44.824: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:42:44.824: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:42:45.135: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:42:45.136: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Tear down block device "/dev/loop0" on node "test-6bbac58e9d-minion-group-1pk2" at path /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4/file Nov 22 11:42:45.435: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:42:45.435: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Removing the test directory /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4 Nov 22 11:42:45.725: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-104659ab-fc0c-4447-bc9e-228147c90de4] Namespace:provisioning-7530 PodName:hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Nov 22 11:42:45.726: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Deleting pod hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6 in namespace provisioning-7530 STEP: Deleting pod hostexec-test-6bbac58e9d-minion-group-ldgb-rvmrf in namespace provisioning-7530 STEP: Deleting pod hostexec-test-6bbac58e9d-minion-group-1pk2-zr86q in namespace provisioning-7530 STEP: Deleting pod hostexec-test-6bbac58e9d-minion-group-dtt3-vjhcl in namespace provisioning-7530 Nov 22 11:42:46.282: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "provisioning-7530". STEP: Found 25 events. 
Nov 22 11:42:46.323: INFO: At 2019-11-22 11:40:43 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Nov 22 11:42:46.323: INFO: At 2019-11-22 11:40:43 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:40:43 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:40:46 +0000 UTC - event for pvc-8glvb: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "provisioning-7530" not found Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:01 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-zr86q: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:01 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-zr86q: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:01 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-zr86q: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:03 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-dtt3-vjhcl: {kubelet test-6bbac58e9d-minion-group-dtt3} Started: Started container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:03 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-dtt3-vjhcl: {kubelet test-6bbac58e9d-minion-group-dtt3} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:03 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-dtt3-vjhcl: {kubelet test-6bbac58e9d-minion-group-dtt3} Created: Created container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:07 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-ldgb-rvmrf: {kubelet test-6bbac58e9d-minion-group-ldgb} Created: Created container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:07 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-ldgb-rvmrf: {kubelet test-6bbac58e9d-minion-group-ldgb} Started: Started container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:07 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-ldgb-rvmrf: {kubelet test-6bbac58e9d-minion-group-ldgb} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:10 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:10 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container test-container-subpath-local-preprovisionedpv-rslq Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:11 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container 
test-container-volume-local-preprovisionedpv-rslq Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:11 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container test-container-volume-local-preprovisionedpv-rslq Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:11 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:11 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container test-container-subpath-local-preprovisionedpv-rslq Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:54 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Killing: Stopping container test-container-subpath-local-preprovisionedpv-rslq Nov 22 11:42:46.323: INFO: At 2019-11-22 11:41:54 +0000 UTC - event for pod-subpath-test-local-preprovisionedpv-rslq: {kubelet test-6bbac58e9d-minion-group-1pk2} Killing: Stopping container test-container-volume-local-preprovisionedpv-rslq Nov 22 11:42:46.323: INFO: At 2019-11-22 11:42:46 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-wq5n6: {kubelet test-6bbac58e9d-minion-group-1pk2} Killing: Stopping container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:42:46 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-1pk2-zr86q: {kubelet test-6bbac58e9d-minion-group-1pk2} Killing: Stopping container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:42:46 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-dtt3-vjhcl: {kubelet test-6bbac58e9d-minion-group-dtt3} Killing: Stopping container agnhost Nov 22 11:42:46.323: INFO: At 2019-11-22 11:42:46 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-ldgb-rvmrf: {kubelet test-6bbac58e9d-minion-group-ldgb} Killing: Stopping container agnhost Nov 22 11:42:46.362: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 11:42:46.362: INFO: Nov 22 11:42:46.411: INFO: Logging node info for node test-6bbac58e9d-master Nov 22 11:42:46.450: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-master /api/v1/nodes/test-6bbac58e9d-master 8a7a430e-36f3-4dcf-b7dd-f2a903ca1fa5 36516 0 2019-11-22 09:29:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: 
{{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.175.21,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa8a320b898c1d5588780170530d5cf8,SystemUUID:fa8a320b-898c-1d55-8878-0170530d5cf8,BootID:2730095f-f6ec-4217-a9ae-32ba996e1eed,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:212137343,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:200623393,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:110377926,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 
k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:42:46.451: INFO: Logging kubelet events for node test-6bbac58e9d-master Nov 22 11:42:46.496: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-master Nov 22 11:42:46.547: INFO: kube-controller-manager-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 22 11:42:46.547: INFO: etcd-empty-dir-cleanup-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container etcd-empty-dir-cleanup ready: true, restart count 1 Nov 22 11:42:46.547: INFO: etcd-server-events-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:42:46.547: INFO: etcd-server-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:42:46.547: INFO: kube-addon-manager-test-6bbac58e9d-master started at 2019-11-22 09:29:05 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 22 11:42:46.547: INFO: metadata-proxy-v0.1-xr6wl started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:46.547: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:42:46.547: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:46.547: INFO: kube-apiserver-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container kube-apiserver ready: true, restart count 0 Nov 22 11:42:46.547: INFO: l7-lb-controller-test-6bbac58e9d-master started at 2019-11-22 09:29:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 22 11:42:46.547: INFO: fluentd-gcp-v3.2.0-fxhtk started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:46.547: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:42:46.547: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:46.547: INFO: kube-scheduler-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.547: INFO: Container kube-scheduler ready: true, restart count 0 Nov 22 11:42:46.697: INFO: Latency metrics for node 
test-6bbac58e9d-master Nov 22 11:42:46.697: INFO: Logging node info for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:42:46.743: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-1pk2 /api/v1/nodes/test-6bbac58e9d-minion-group-1pk2 a4f21abc-d48a-4c0f-a26f-9e634bca825a 36733 0 2019-11-22 10:48:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-1pk2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-1pk2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-disruptive-1972":"test-6bbac58e9d-minion-group-1pk2","csi-hostpath-disruptive-9353":"test-6bbac58e9d-minion-group-1pk2","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-1pk2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-1pk2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836020736 0} {<nil>} 7652364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573876736 0} {<nil>} 7396364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 
UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:104.198.3.26,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c363001bc173de2779c31270a0a03e8d,SystemUUID:C363001B-C173-DE27-79C3-1270A0A03E8D,BootID:ecfbc66f-0a8c-4787-a7bf-8e0ebe1e8bb2,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 
gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 
gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:42:46.743: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:42:46.811: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:42:46.874: INFO: kube-proxy-test-6bbac58e9d-minion-group-1pk2 started at 2019-11-22 11:41:54 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.874: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:42:46.874: INFO: metadata-proxy-v0.1-4bxj9 started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:46.874: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:42:46.874: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:46.874: INFO: npd-v0.8.0-224c2 started at 2019-11-22 10:48:20 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:46.874: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:42:46.874: INFO: fluentd-gcp-v3.2.0-4fdmw started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:46.874: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:42:46.874: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:47.040: INFO: Latency metrics for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:42:47.040: INFO: Logging node info for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:42:47.098: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-dtt3 /api/v1/nodes/test-6bbac58e9d-minion-group-dtt3 bbcaa4a7-21ed-4b1a-8d6c-097e686c368c 36329 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-dtt3 kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-dtt3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3611":"test-6bbac58e9d-minion-group-dtt3","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-dtt3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-dtt3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:37:49 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:37:49 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:37:49 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:37:49 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.160.250,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:015ba3833761f0b9cd8a2196bf6fb79d,SystemUUID:015BA383-3761-F0B9-CD8A-2196BF6FB79D,BootID:c9ec395e-18ec-40c2-b13c-49ae0567ad15,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 
quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:2114a2f70d34fa2821fb7f9bf373be5f44c8cbfeb6097fb5ba8eaf73cd38b72a k8s.gcr.io/addon-resizer:1.8.6],SizeBytes:37928220,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:42:47.098: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:42:47.140: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:42:47.199: INFO: metadata-proxy-v0.1-qj8lx started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.199: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:42:47.199: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:47.200: INFO: npd-v0.8.0-86sjk started at 2019-11-22 09:29:41 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.200: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:42:47.200: INFO: 
kube-dns-autoscaler-65bc6d4889-kncqk started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.200: INFO: Container autoscaler ready: true, restart count 0 Nov 22 11:42:47.200: INFO: kube-proxy-test-6bbac58e9d-minion-group-dtt3 started at 2019-11-22 11:33:24 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.200: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:42:47.200: INFO: heapster-v1.6.0-beta.1-859599df9f-9nl5x started at 2019-11-22 09:29:47 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.200: INFO: Container heapster ready: true, restart count 0 Nov 22 11:42:47.200: INFO: Container heapster-nanny ready: true, restart count 0 Nov 22 11:42:47.200: INFO: fluentd-gcp-v3.2.0-z4gtt started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.200: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:42:47.200: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:47.200: INFO: coredns-65567c7b57-vqz56 started at 2019-11-22 09:29:55 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.200: INFO: Container coredns ready: true, restart count 0 Nov 22 11:42:47.200: INFO: kubernetes-dashboard-7778f8b456-dwww9 started at 2019-11-22 09:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.200: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 22 11:42:47.200: INFO: metrics-server-v0.3.6-7d96444597-lfv7c started at 2019-11-22 09:29:45 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.200: INFO: Container metrics-server ready: true, restart count 0 Nov 22 11:42:47.200: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 22 11:42:47.339: INFO: Latency metrics for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:42:47.340: INFO: Logging node info for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:42:47.381: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-ldgb /api/v1/nodes/test-6bbac58e9d-minion-group-ldgb 7af88a45-91da-49e2-aad1-693979aa273c 36350 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-ldgb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-ldgb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-ldgb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 11:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:104.199.127.196,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e1c327ba82c05d274d059f31a030f91,SystemUUID:7E1C327B-A82C-05D2-74D0-59F31A030F91,BootID:153cc788-4fe4-4a95-a234-e7f53446bb04,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 
gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:42:47.383: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:42:47.425: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-ldgb Nov 22 11:42:47.480: INFO: fluentd-gcp-scaler-76d9c77b4d-wh4nt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.480: INFO: Container fluentd-gcp-scaler ready: true, restart count 0 Nov 22 11:42:47.480: INFO: coredns-65567c7b57-s9876 started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.480: INFO: Container coredns ready: true, restart count 0 Nov 22 11:42:47.480: INFO: fluentd-gcp-v3.2.0-f9q96 started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.480: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:42:47.480: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:47.480: INFO: metadata-proxy-v0.1-ptzjq started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.480: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:42:47.480: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 
11:42:47.480: INFO: l7-default-backend-678889f899-sn2pt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.480: INFO: Container default-http-backend ready: true, restart count 0 Nov 22 11:42:47.480: INFO: volume-snapshot-controller-0 started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.480: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 22 11:42:47.480: INFO: event-exporter-v0.3.1-747b47fcd-8chbt started at 2019-11-22 10:43:02 +0000 UTC (0+2 container statuses recorded) Nov 22 11:42:47.480: INFO: Container event-exporter ready: true, restart count 0 Nov 22 11:42:47.480: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:42:47.480: INFO: kube-proxy-test-6bbac58e9d-minion-group-ldgb started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.480: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:42:47.480: INFO: npd-v0.8.0-wmkxq started at 2019-11-22 09:29:42 +0000 UTC (0+1 container statuses recorded) Nov 22 11:42:47.480: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:42:47.627: INFO: Latency metrics for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:42:47.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "provisioning-7530" for this suite.
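The node dumps above include the csi.volume.kubernetes.io/nodeid annotation, which records which CSI drivers have completed node registration on each node (pd.csi.storage.gke.io on all three minions, plus the per-test csi-hostpath drivers). A minimal sketch for inspecting that registration state by hand, assuming kubectl access to this cluster and using a node name taken from the log; these commands are illustrative and not part of the test run:

# CSINode lists every CSI driver registered on the node and its node ID
kubectl get csinode test-6bbac58e9d-minion-group-dtt3 -o yaml

# The same data is mirrored in the node annotation shown in the dump above
kubectl get node test-6bbac58e9d-minion-group-dtt3 \
  -o jsonpath='{.metadata.annotations.csi\.volume\.kubernetes\.io/nodeid}'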
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sgracefully\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:340 Nov 22 11:45:05.656: Unexpected error: <*errors.errorString | 0xc0003296a0>: { s: "pod ran to completion", } pod ran to completion occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:103 from junit_01.xml
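The event collection below shows the underlying cause: the hostexec pod was rejected by the kubelet with an OutOfpods event because node test-6bbac58e9d-minion-group-ldgb was already at its pod capacity (requested: 1, used: 110, capacity: 110), so the pod ended up in the Failed phase, which the framework reports as "pod ran to completion". A minimal sketch, assuming kubectl access to the cluster, for checking a node's pod allocatable against what is actually bound to it:

# Pod allocatable reported by the kubelet (110 on these nodes, per the node dumps in this log)
kubectl get node test-6bbac58e9d-minion-group-ldgb -o jsonpath='{.status.allocatable.pods}'

# Count the pods currently bound to that node across all namespaces
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=test-6bbac58e9d-minion-group-ldgb --no-headers | wc -l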
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client Nov 22 11:45:04.620: INFO: >>> kubeConfig: /workspace/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename provisioning �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:340 Nov 22 11:45:04.858: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume Nov 22 11:45:04.997: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server Nov 22 11:45:05.656: FAIL: Unexpected error: <*errors.errorString | 0xc0003296a0>: { s: "pod ran to completion", } pod ran to completion occurred [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 �[1mSTEP�[0m: Collecting events from namespace "provisioning-7153". �[1mSTEP�[0m: Found 1 events. Nov 22 11:45:05.695: INFO: At 2019-11-22 11:45:05 +0000 UTC - event for hostexec-test-6bbac58e9d-minion-group-ldgb-kkbwl: {kubelet test-6bbac58e9d-minion-group-ldgb} OutOfpods: Node didn't have enough resource: pods, requested: 1, used: 110, capacity: 110 Nov 22 11:45:05.733: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 11:45:05.733: INFO: hostexec-test-6bbac58e9d-minion-group-ldgb-kkbwl test-6bbac58e9d-minion-group-ldgb Failed [] Nov 22 11:45:05.733: INFO: Nov 22 11:45:05.774: INFO: Logging node info for node test-6bbac58e9d-master Nov 22 11:45:05.812: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-master /api/v1/nodes/test-6bbac58e9d-master 8a7a430e-36f3-4dcf-b7dd-f2a903ca1fa5 36516 0 2019-11-22 09:29:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} 
{<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.175.21,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa8a320b898c1d5588780170530d5cf8,SystemUUID:fa8a320b-898c-1d55-8878-0170530d5cf8,BootID:2730095f-f6ec-4217-a9ae-32ba996e1eed,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:212137343,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:200623393,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:110377926,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f 
k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:45:05.812: INFO: Logging kubelet events for node test-6bbac58e9d-master Nov 22 11:45:05.858: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-master Nov 22 11:45:05.902: INFO: kube-scheduler-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container kube-scheduler ready: true, restart count 0 Nov 22 11:45:05.902: INFO: l7-lb-controller-test-6bbac58e9d-master started at 2019-11-22 09:29:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 22 11:45:05.902: INFO: fluentd-gcp-v3.2.0-fxhtk started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:05.902: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:45:05.902: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:05.902: INFO: kube-apiserver-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container kube-apiserver ready: true, restart count 0 Nov 22 11:45:05.902: INFO: kube-controller-manager-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 22 11:45:05.902: INFO: etcd-empty-dir-cleanup-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container etcd-empty-dir-cleanup ready: true, restart count 1 Nov 22 11:45:05.902: INFO: etcd-server-events-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:45:05.902: INFO: etcd-server-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:45:05.902: INFO: kube-addon-manager-test-6bbac58e9d-master started at 2019-11-22 09:29:05 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:05.902: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 22 11:45:05.902: INFO: metadata-proxy-v0.1-xr6wl started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:05.902: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:45:05.902: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.034: INFO: Latency metrics for node test-6bbac58e9d-master Nov 22 11:45:06.035: INFO: Logging node info for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:45:06.075: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-1pk2 
/api/v1/nodes/test-6bbac58e9d-minion-group-1pk2 a4f21abc-d48a-4c0f-a26f-9e634bca825a 37388 0 2019-11-22 10:48:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-1pk2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-1pk2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-disruptive-1972":"test-6bbac58e9d-minion-group-1pk2","csi-hostpath-disruptive-9353":"test-6bbac58e9d-minion-group-1pk2","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-1pk2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-1pk2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836020736 0} {<nil>} 7652364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573876736 0} {<nil>} 7396364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 
11:43:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:104.198.3.26,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c363001bc173de2779c31270a0a03e8d,SystemUUID:C363001B-C173-DE27-79C3-1270A0A03E8D,BootID:ecfbc66f-0a8c-4787-a7bf-8e0ebe1e8bb2,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 
k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 
quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:45:06.075: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:45:06.122: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:45:06.210: INFO: maxp-96 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-96 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-1 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-1 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-123 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-123 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-211 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-211 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-241 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-241 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-271 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-271 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-297 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-297 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-134 started at 2019-11-22 11:43:34 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-134 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-263 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-263 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-298 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-298 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-116 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-116 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-129 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-129 
ready: true, restart count 0 Nov 22 11:45:06.210: INFO: npd-v0.8.0-224c2 started at 2019-11-22 10:48:20 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-206 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-206 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-268 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-268 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-286 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-286 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: kube-proxy-test-6bbac58e9d-minion-group-1pk2 started at 2019-11-22 11:41:54 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-51 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-51 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-74 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-74 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-77 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-77 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-109 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-109 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-169 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-169 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: metadata-proxy-v0.1-4bxj9 started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.210: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:45:06.210: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-229 started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-229 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-274 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-274 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: fluentd-gcp-v3.2.0-4fdmw started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.210: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:45:06.210: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-83 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-83 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-167 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-167 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-288 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-288 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-93 started at 2019-11-22 11:43:32 +0000 UTC (0+1 
container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-93 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-170 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-170 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-190 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-190 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-0 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-0 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-177 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-177 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-184 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-184 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-201 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-201 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-239 started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-239 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-244 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-244 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-3 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-3 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-42 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-42 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-290 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-290 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-4 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-4 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-163 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-163 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-202 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-202 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-212 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-212 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-89 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-89 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-133 started at 2019-11-22 11:43:34 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-133 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-142 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-142 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-155 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 
11:45:06.210: INFO: Container maxp-155 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-278 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-278 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-6 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-6 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-9 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-9 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-66 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-66 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-148 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-148 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-158 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-158 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-188 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-188 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-24 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-24 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-30 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-30 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-121 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-121 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-137 started at 2019-11-22 11:43:34 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-137 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-293 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-293 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-302 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-302 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-36 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-36 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-2 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-2 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-107 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-107 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-199 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-199 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-67 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-67 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-110 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-110 
ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-185 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-185 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-240 started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-240 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-27 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-27 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-46 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-46 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-276 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-276 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-141 started at 2019-11-22 11:43:34 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-141 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-12 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-12 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-86 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-86 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-203 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-203 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-256 started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-256 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-21 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-21 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-119 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-119 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-124 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-124 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-160 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-160 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-218 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-218 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-60 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-60 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-105 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-105 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-164 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-164 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-56 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-56 ready: true, restart count 0 Nov 
22 11:45:06.210: INFO: maxp-59 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-59 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-154 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-154 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-217 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-217 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-258 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-258 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-18 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-18 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-15 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-15 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-112 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-112 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-113 started at 2019-11-22 11:43:33 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-113 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-44 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-44 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-140 started at 2019-11-22 11:43:34 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-140 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-179 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-179 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-236 started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-236 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-33 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-33 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-165 started at 2019-11-22 11:43:49 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-165 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-249 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-249 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-70 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-70 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-81 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-81 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-91 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-91 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-207 started at 2019-11-22 11:43:50 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-207 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-222 
started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-222 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-269 started at 2019-11-22 11:44:04 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-269 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-52 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-52 ready: true, restart count 0 Nov 22 11:45:06.210: INFO: maxp-257 started at 2019-11-22 11:44:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.210: INFO: Container maxp-257 ready: true, restart count 0 Nov 22 11:45:06.411: INFO: Latency metrics for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:45:06.411: INFO: Logging node info for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:45:06.451: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-dtt3 /api/v1/nodes/test-6bbac58e9d-minion-group-dtt3 bbcaa4a7-21ed-4b1a-8d6c-097e686c368c 37003 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-dtt3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-dtt3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3611":"test-6bbac58e9d-minion-group-dtt3","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-dtt3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-dtt3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 
11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.160.250,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:015ba3833761f0b9cd8a2196bf6fb79d,SystemUUID:015BA383-3761-F0B9-CD8A-2196BF6FB79D,BootID:c9ec395e-18ec-40c2-b13c-49ae0567ad15,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 
k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:2114a2f70d34fa2821fb7f9bf373be5f44c8cbfeb6097fb5ba8eaf73cd38b72a k8s.gcr.io/addon-resizer:1.8.6],SizeBytes:37928220,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:45:06.451: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:45:06.493: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:45:06.553: INFO: maxp-136 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-136 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-20 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-20 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-85 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-85 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-195 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-195 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-228 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-228 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-291 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: 
Container maxp-291 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-73 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-73 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-76 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-76 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-237 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-237 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: npd-v0.8.0-86sjk started at 2019-11-22 09:29:41 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-54 started at 2019-11-22 11:43:27 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-54 ready: true, restart count 0 Nov 22 11:45:06.553: INFO: maxp-131 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.553: INFO: Container maxp-131 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-183 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-183 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-193 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-193 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-299 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-299 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-5 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-5 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-180 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-180 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-305 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-305 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-23 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-23 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-181 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-181 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-210 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-210 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-10 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-10 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-40 started at 2019-11-22 11:43:27 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-40 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-126 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-126 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-220 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container 
maxp-220 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-289 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-289 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-68 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-68 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-221 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-221 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-296 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-296 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-147 started at 2019-11-22 11:43:37 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-147 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-292 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-292 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-62 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-62 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-150 started at 2019-11-22 11:43:37 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-150 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-90 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-90 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-287 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-287 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-307 started at 2019-11-22 11:44:07 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-307 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: coredns-65567c7b57-vqz56 started at 2019-11-22 09:29:55 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container coredns ready: true, restart count 0 Nov 22 11:45:06.554: INFO: kubernetes-dashboard-7778f8b456-dwww9 started at 2019-11-22 09:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-87 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-87 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-157 started at 2019-11-22 11:43:37 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-157 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-162 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-162 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-253 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-253 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-57 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-57 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-125 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 
11:45:06.554: INFO: Container maxp-125 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-159 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-159 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-223 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-223 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-270 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-270 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-306 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-306 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-48 started at 2019-11-22 11:43:27 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-48 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-231 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-231 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-234 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-234 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-79 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-79 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-101 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-101 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-108 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-108 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-153 started at 2019-11-22 11:43:37 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-153 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-303 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-303 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: kube-proxy-test-6bbac58e9d-minion-group-dtt3 started at 2019-11-22 11:33:24 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-115 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-115 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-138 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-138 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: heapster-v1.6.0-beta.1-859599df9f-9nl5x started at 2019-11-22 09:29:47 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.554: INFO: Container heapster ready: true, restart count 0 Nov 22 11:45:06.554: INFO: Container heapster-nanny ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-26 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-26 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-31 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-31 ready: true, restart 
count 0 Nov 22 11:45:06.554: INFO: maxp-41 started at 2019-11-22 11:43:27 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-41 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-64 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-64 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-175 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-175 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-294 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-294 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-38 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-38 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-80 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-80 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-152 started at 2019-11-22 11:43:37 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-152 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-254 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-254 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: metrics-server-v0.3.6-7d96444597-lfv7c started at 2019-11-22 09:29:45 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.554: INFO: Container metrics-server ready: true, restart count 0 Nov 22 11:45:06.554: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-161 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-161 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-285 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-285 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-14 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-14 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-45 started at 2019-11-22 11:43:27 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-45 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-171 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-171 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-178 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-178 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-255 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-255 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-95 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-95 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-224 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-224 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-273 started at 2019-11-22 11:44:06 +0000 UTC (0+1 
container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-273 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-71 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-71 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-277 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-277 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-279 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-279 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-281 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-281 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-243 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-243 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-16 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-16 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-29 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-29 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-98 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-98 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-295 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-295 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-92 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-92 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-145 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-145 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-230 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-230 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-300 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-300 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: metadata-proxy-v0.1-qj8lx started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.554: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:45:06.554: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.554: INFO: kube-dns-autoscaler-65bc6d4889-kncqk started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container autoscaler ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-8 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-8 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-143 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-143 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-192 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-192 
ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-301 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-301 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: fluentd-gcp-v3.2.0-z4gtt started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.554: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:45:06.554: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-35 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-35 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-114 started at 2019-11-22 11:43:36 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-114 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-245 started at 2019-11-22 11:44:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-245 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-53 started at 2019-11-22 11:43:27 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-53 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-174 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-174 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-232 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-232 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-88 started at 2019-11-22 11:43:35 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-88 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-284 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-284 ready: true, restart count 0 Nov 22 11:45:06.554: INFO: maxp-304 started at 2019-11-22 11:44:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.554: INFO: Container maxp-304 ready: true, restart count 0 Nov 22 11:45:06.718: INFO: Latency metrics for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:45:06.718: INFO: Logging node info for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:45:06.758: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-ldgb /api/v1/nodes/test-6bbac58e9d-minion-group-ldgb 7af88a45-91da-49e2-aad1-693979aa273c 38599 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-ldgb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-ldgb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-ldgb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:44:57 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:44:57 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:44:57 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:44:57 +0000 
UTC,LastTransitionTime:2019-11-22 11:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:104.199.127.196,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e1c327ba82c05d274d059f31a030f91,SystemUUID:7E1C327B-A82C-05D2-74D0-59F31A030F91,BootID:153cc788-4fe4-4a95-a234-e7f53446bb04,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 
gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:45:06.759: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:45:06.801: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-ldgb Nov 22 11:45:06.862: INFO: metadata-proxy-v0.1-ptzjq started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.862: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:45:06.862: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.862: INFO: fluentd-gcp-scaler-76d9c77b4d-wh4nt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container fluentd-gcp-scaler ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-7 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-7 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-84 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: 
Container maxp-84 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-144 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-144 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-251 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-251 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: kube-proxy-test-6bbac58e9d-minion-group-ldgb started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:45:06.862: INFO: npd-v0.8.0-wmkxq started at 2019-11-22 09:29:42 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-176 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-176 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-187 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-187 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-214 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-214 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-259 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-259 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-103 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-103 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-194 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-194 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-205 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-205 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-120 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-120 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-139 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-139 ready: true, restart count 0 Nov 22 11:45:06.862: INFO: maxp-248 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.862: INFO: Container maxp-248 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: l7-default-backend-678889f899-sn2pt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container default-http-backend ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-11 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-11 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-61 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-61 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-72 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-72 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-186 started at 2019-11-22 11:43:51 
+0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-186 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-63 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-63 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-65 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-65 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-117 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-117 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-280 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-280 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-265 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-265 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-49 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-49 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-135 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-135 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-197 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-197 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-246 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-246 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-127 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-127 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-233 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-233 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-28 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-28 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-39 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-39 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-47 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-47 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-106 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-106 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-172 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-172 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-252 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-252 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-260 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-260 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-261 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container 
statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-261 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: volume-snapshot-controller-0 started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-37 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-37 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-130 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-130 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-198 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-198 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-242 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-242 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-282 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-282 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-13 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-13 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-50 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-50 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-69 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-69 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-204 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-204 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-238 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-238 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-275 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-275 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-55 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-55 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-82 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-82 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-272 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-272 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-34 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-34 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-100 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-100 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-146 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-146 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-17 started at 2019-11-22 11:43:25 +0000 UTC (0+1 
container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-17 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-97 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-97 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-99 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-99 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-151 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-151 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-215 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-215 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-216 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-216 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-32 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-32 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-200 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-200 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-128 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-128 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-283 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-283 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-58 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-58 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-78 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-78 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-132 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-132 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-250 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-250 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-94 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-94 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-156 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-156 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-196 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-196 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-267 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-267 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-19 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-19 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-149 started at 2019-11-22 11:43:32 +0000 UTC (0+1 container statuses recorded) Nov 22 
11:45:06.863: INFO: Container maxp-149 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-209 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-209 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: event-exporter-v0.3.1-747b47fcd-8chbt started at 2019-11-22 10:43:02 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.863: INFO: Container event-exporter ready: true, restart count 0 Nov 22 11:45:06.863: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-225 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-225 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-264 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-264 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-104 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-104 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-168 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-168 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-247 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-247 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: fluentd-gcp-v3.2.0-f9q96 started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:45:06.863: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:45:06.863: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-25 started at 2019-11-22 11:43:26 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-25 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-213 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-213 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: hostexec-test-6bbac58e9d-minion-group-ldgb-kkbwl started at 2019-11-22 11:45:05 +0000 UTC (0+0 container statuses recorded) Nov 22 11:45:06.863: INFO: maxp-22 started at 2019-11-22 11:43:25 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-22 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-75 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-75 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-191 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-191 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-235 started at 2019-11-22 11:43:52 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-235 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-118 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-118 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-262 started at 2019-11-22 11:43:53 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-262 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-111 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 
11:45:06.863: INFO: Container maxp-111 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-189 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-189 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-173 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-173 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-208 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-208 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-122 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-122 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-166 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-166 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-219 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-219 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-226 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-226 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-227 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-227 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: coredns-65567c7b57-s9876 started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container coredns ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-182 started at 2019-11-22 11:43:51 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-182 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-266 started at 2019-11-22 11:44:10 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-266 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-43 started at 2019-11-22 11:43:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-43 ready: true, restart count 0 Nov 22 11:45:06.863: INFO: maxp-102 started at 2019-11-22 11:43:31 +0000 UTC (0+1 container statuses recorded) Nov 22 11:45:06.863: INFO: Container maxp-102 ready: true, restart count 0 Nov 22 11:45:07.061: INFO: Latency metrics for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:45:07.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-7153" for this suite.
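For reference, the per-node pod dump above (the "Container ... ready: true, restart count 0" lines) can be reproduced against the same cluster without re-running the suite. The sketch below is not the e2e framework's own dump helper: it simply lists pods by spec.nodeName with client-go; the node name is taken from the log, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Node name taken from the log above; kubeconfig path is an assumption.
	node := flag.String("node", "test-6bbac58e9d-minion-group-ldgb", "node whose pods to list")
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// All pods the API server has scheduled to this node, roughly what the
	// per-node dump above prints (the framework itself asks the kubelet).
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + *node,
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s: container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}

The equivalent one-liner is kubectl get pods --all-namespaces --field-selector spec.nodeName=test-6bbac58e9d-minion-group-ldgb -o wide.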
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\sdisruptive\[Disruptive\]\sShould\stest\sthat\spv\swritten\sbefore\skubelet\srestart\sis\sreadable\safter\srestart\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/disruptive.go:149 Nov 22 11:43:09.640: SSH to Node "test-6bbac58e9d-minion-group-1pk2" errored. Unexpected error: <*errors.errorString | 0xc0040916a0>: { s: "error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: read tcp 10.60.54.192:51572->104.198.3.26:22: read: connection reset by peer'", } error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: read tcp 10.60.54.192:51572->104.198.3.26:22: read: connection reset by peer' occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:164from junit_01.xml
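This failure happens before the kubelet is ever restarted: the disruptive test reaches the node over SSH and the TCP connection is reset during the handshake. Below is a minimal sketch of that kind of node access, not the e2e framework's helper; the user and address come from the error message, while the key path and the restart command are assumptions.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOnNode opens an SSH session to a node and runs one command, roughly the
// way the disruptive test restarts the kubelet. A TCP reset during the
// handshake surfaces from ssh.Dial as the "ssh: handshake failed ...
// connection reset by peer" error quoted above.
func runOnNode(addr, user, keyFile, cmd string) (string, error) {
	key, err := os.ReadFile(keyFile)
	if err != nil {
		return "", fmt.Errorf("reading key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", fmt.Errorf("parsing key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test clusters only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", fmt.Errorf("error getting SSH client to %s@%s: %v", user, addr, err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// User and address come from the error above; the key path and the
	// restart command are assumptions, not what the framework hard-codes.
	out, err := runOnNode("104.198.3.26:22", "prow", "/workspace/.ssh/google_compute_engine",
		"sudo systemctl restart kubelet")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}

A reset during the handshake, as logged here, typically points at the node's sshd or the network path from the test runner rather than the storage code under test, which is why the pv-written-before-restart check never gets to run.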
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 22 11:42:52.888: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename disruptive STEP: Waiting for a default service account to be provisioned in namespace [It] Should test that pv written before kubelet restart is readable after restart. /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/disruptive.go:149 STEP: creating an external dynamic provisioner pod STEP: locating the provisioner pod Nov 22 11:42:55.430: INFO: Creating resource for dynamic PV Nov 22 11:42:55.430: INFO: Using claimSize:5Gi, test suite supported size:{ }, driver(nfs) supported size:{ } STEP: creating a StorageClass disruptive-6928-nfs-sc5vzpb STEP: creating a claim Nov 22 11:42:55.513: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfsghpq4] to have phase Bound Nov 22 11:42:55.553: INFO: PersistentVolumeClaim nfsghpq4 found but phase is Pending instead of Bound. Nov 22 11:42:57.594: INFO: PersistentVolumeClaim nfsghpq4 found but phase is Pending instead of Bound. Nov 22 11:42:59.635: INFO: PersistentVolumeClaim nfsghpq4 found but phase is Pending instead of Bound. Nov 22 11:43:01.673: INFO: PersistentVolumeClaim nfsghpq4 found and phase=Bound (6.160124859s) STEP: Creating a pod with pvc STEP: Writing to the volume. Nov 22 11:43:03.914: INFO: ExecWithOptions {Command:[/bin/sh -c echo yKNaEkZwNjdygRvQfXKC2PhIbARzQ1Vid6ITZ811fiSfiE141v4eQCUIG/6kI5o2PPpqId2c9byJs0OhnkgNhw== | base64 -d | sha256sum] Namespace:disruptive-6928 PodName:security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 11:43:03.914: INFO: >>> kubeConfig: /workspace/.kube/config Nov 22 11:43:04.180: INFO: ExecWithOptions {Command:[/bin/sh -c echo yKNaEkZwNjdygRvQfXKC2PhIbARzQ1Vid6ITZ811fiSfiE141v4eQCUIG/6kI5o2PPpqId2c9byJs0OhnkgNhw== | base64 -d | dd of=/mnt/volume1/file1.txt bs=64 count=1] Namespace:disruptive-6928 PodName:security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 11:43:04.181: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Restarting kubelet Nov 22 11:43:04.496: INFO: Checking if systemctl command is present Nov 22 11:43:09.640: FAIL: SSH to Node "test-6bbac58e9d-minion-group-1pk2" errored. 
Unexpected error: <*errors.errorString | 0xc0040916a0>: { s: "error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: read tcp 10.60.54.192:51572->104.198.3.26:22: read: connection reset by peer'", } error getting SSH client to prow@104.198.3.26:22: 'ssh: handshake failed: read tcp 10.60.54.192:51572->104.198.3.26:22: read: connection reset by peer' occurred STEP: Deleting pod Nov 22 11:43:09.641: INFO: Deleting pod "security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff" in namespace "disruptive-6928" Nov 22 11:43:09.683: INFO: Wait up to 5m0s for pod "security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff" to be fully deleted STEP: Deleting pvc Nov 22 11:43:15.760: INFO: Deleting PersistentVolumeClaim "nfsghpq4" Nov 22 11:43:15.801: INFO: Waiting up to 5m0s for PersistentVolume pvc-c2b44bb7-6906-4509-a10d-41295bf2b460 to get deleted Nov 22 11:43:15.843: INFO: PersistentVolume pvc-c2b44bb7-6906-4509-a10d-41295bf2b460 found and phase=Released (42.586487ms) Nov 22 11:43:20.891: INFO: PersistentVolume pvc-c2b44bb7-6906-4509-a10d-41295bf2b460 was removed STEP: Deleting sc Nov 22 11:43:20.941: INFO: Deleting pod "external-provisioner-9lrr6" in namespace "disruptive-6928" Nov 22 11:43:21.007: INFO: Wait up to 5m0s for pod "external-provisioner-9lrr6" to be fully deleted [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "disruptive-6928". STEP: Found 14 events. Nov 22 11:43:23.173: INFO: At 2019-11-22 11:42:53 +0000 UTC - event for external-provisioner-9lrr6: {default-scheduler } Scheduled: Successfully assigned disruptive-6928/external-provisioner-9lrr6 to test-6bbac58e9d-minion-group-1pk2 Nov 22 11:43:23.173: INFO: At 2019-11-22 11:42:54 +0000 UTC - event for external-provisioner-9lrr6: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2" already present on machine Nov 22 11:43:23.173: INFO: At 2019-11-22 11:42:54 +0000 UTC - event for external-provisioner-9lrr6: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container nfs-provisioner Nov 22 11:43:23.174: INFO: At 2019-11-22 11:42:54 +0000 UTC - event for external-provisioner-9lrr6: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container nfs-provisioner Nov 22 11:43:23.174: INFO: At 2019-11-22 11:42:55 +0000 UTC - event for nfsghpq4: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "example.com/nfs-disruptive-6928" or manually created by system administrator Nov 22 11:43:23.174: INFO: At 2019-11-22 11:42:59 +0000 UTC - event for example.com-nfs-disruptive-6928: {example.com/nfs-disruptive-6928_external-provisioner-9lrr6_b43c9f9e-9230-4aeb-94f4-e564842b1b30 } LeaderElection: external-provisioner-9lrr6_b43c9f9e-9230-4aeb-94f4-e564842b1b30 became leader Nov 22 11:43:23.175: INFO: At 2019-11-22 11:42:59 +0000 UTC - event for nfsghpq4: {example.com/nfs-disruptive-6928_external-provisioner-9lrr6_b43c9f9e-9230-4aeb-94f4-e564842b1b30 } ProvisioningSucceeded: Successfully provisioned volume pvc-c2b44bb7-6906-4509-a10d-41295bf2b460 Nov 22 11:43:23.175: INFO: At 2019-11-22 11:42:59 +0000 UTC - event for nfsghpq4: {example.com/nfs-disruptive-6928_external-provisioner-9lrr6_b43c9f9e-9230-4aeb-94f4-e564842b1b30 } Provisioning: External provisioner is provisioning 
volume for claim "disruptive-6928/nfsghpq4" Nov 22 11:43:23.176: INFO: At 2019-11-22 11:43:01 +0000 UTC - event for security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff: {default-scheduler } Scheduled: Successfully assigned disruptive-6928/security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff to test-6bbac58e9d-minion-group-1pk2 Nov 22 11:43:23.176: INFO: At 2019-11-22 11:43:02 +0000 UTC - event for security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff: {kubelet test-6bbac58e9d-minion-group-1pk2} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 22 11:43:23.177: INFO: At 2019-11-22 11:43:02 +0000 UTC - event for security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff: {kubelet test-6bbac58e9d-minion-group-1pk2} Created: Created container write-pod Nov 22 11:43:23.177: INFO: At 2019-11-22 11:43:02 +0000 UTC - event for security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff: {kubelet test-6bbac58e9d-minion-group-1pk2} Started: Started container write-pod Nov 22 11:43:23.177: INFO: At 2019-11-22 11:43:09 +0000 UTC - event for security-context-6a804649-d8d7-46e0-9140-0aa4962e5fff: {kubelet test-6bbac58e9d-minion-group-1pk2} Killing: Stopping container write-pod Nov 22 11:43:23.178: INFO: At 2019-11-22 11:43:20 +0000 UTC - event for external-provisioner-9lrr6: {kubelet test-6bbac58e9d-minion-group-1pk2} Killing: Stopping container nfs-provisioner Nov 22 11:43:23.217: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 11:43:23.217: INFO: Nov 22 11:43:23.258: INFO: Logging node info for node test-6bbac58e9d-master Nov 22 11:43:23.298: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-master /api/v1/nodes/test-6bbac58e9d-master 8a7a430e-36f3-4dcf-b7dd-f2a903ca1fa5 36516 0 2019-11-22 09:29:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:40:52 +0000 UTC,LastTransitionTime:2019-11-22 09:29:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.175.21,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-master.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa8a320b898c1d5588780170530d5cf8,SystemUUID:fa8a320b-898c-1d55-8878-0170530d5cf8,BootID:2730095f-f6ec-4217-a9ae-32ba996e1eed,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:212137343,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:200623393,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:110377926,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 
k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:43:23.298: INFO: Logging kubelet events for node test-6bbac58e9d-master Nov 22 11:43:23.341: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-master Nov 22 11:43:23.386: INFO: etcd-server-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:43:23.386: INFO: kube-addon-manager-test-6bbac58e9d-master started at 2019-11-22 09:29:05 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 22 11:43:23.386: INFO: metadata-proxy-v0.1-xr6wl started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.386: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:43:23.386: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:23.386: INFO: kube-apiserver-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container kube-apiserver ready: true, restart count 0 Nov 22 11:43:23.386: INFO: kube-controller-manager-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 22 11:43:23.386: INFO: etcd-empty-dir-cleanup-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container etcd-empty-dir-cleanup ready: true, restart count 1 Nov 22 11:43:23.386: INFO: etcd-server-events-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container etcd-container ready: true, restart count 1 Nov 22 11:43:23.386: INFO: kube-scheduler-test-6bbac58e9d-master started at 2019-11-22 09:28:38 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container kube-scheduler ready: true, restart count 0 Nov 22 11:43:23.386: INFO: l7-lb-controller-test-6bbac58e9d-master started at 2019-11-22 09:29:06 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.386: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 22 11:43:23.386: INFO: fluentd-gcp-v3.2.0-fxhtk started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.386: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:43:23.386: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:23.558: INFO: Latency metrics for node test-6bbac58e9d-master Nov 22 11:43:23.559: INFO: Logging node info for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:43:23.599: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-1pk2 /api/v1/nodes/test-6bbac58e9d-minion-group-1pk2 a4f21abc-d48a-4c0f-a26f-9e634bca825a 36949 0 2019-11-22 10:48:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-1pk2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 
topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-1pk2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-disruptive-1972":"test-6bbac58e9d-minion-group-1pk2","csi-hostpath-disruptive-9353":"test-6bbac58e9d-minion-group-1pk2","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-1pk2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-1pk2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836020736 0} {<nil>} 7652364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573876736 0} {<nil>} 7396364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:38:29 +0000 UTC,LastTransitionTime:2019-11-22 10:48:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has 
sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:41:54 +0000 UTC,LastTransitionTime:2019-11-22 11:41:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:104.198.3.26,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-1pk2.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c363001bc173de2779c31270a0a03e8d,SystemUUID:C363001B-C173-DE27-79C3-1270A0A03E8D,BootID:ecfbc66f-0a8c-4787-a7bf-8e0ebe1e8bb2,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:43:23.599: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:43:23.645: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:43:23.689: INFO: kube-proxy-test-6bbac58e9d-minion-group-1pk2 started at 2019-11-22 11:41:54 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.689: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:43:23.689: INFO: metadata-proxy-v0.1-4bxj9 started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.689: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:43:23.689: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:23.689: INFO: npd-v0.8.0-224c2 started at 2019-11-22 10:48:20 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.689: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:43:23.689: INFO: fluentd-gcp-v3.2.0-4fdmw started at 2019-11-22 10:48:20 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.689: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:43:23.689: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:23.838: INFO: Latency metrics for node test-6bbac58e9d-minion-group-1pk2 Nov 22 11:43:23.838: INFO: Logging node info for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:43:23.877: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-dtt3 /api/v1/nodes/test-6bbac58e9d-minion-group-dtt3 bbcaa4a7-21ed-4b1a-8d6c-097e686c368c 37003 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-dtt3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.hostpath.csi/node:test-6bbac58e9d-minion-group-dtt3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3611":"test-6bbac58e9d-minion-group-dtt3","pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-dtt3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-dtt3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:06 +0000 UTC,LastTransitionTime:2019-11-22 09:29:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:42:59 +0000 UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:42:59 +0000 
UTC,LastTransitionTime:2019-11-22 11:37:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.160.250,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-dtt3.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:015ba3833761f0b9cd8a2196bf6fb79d,SystemUUID:015BA383-3761-F0B9-CD8A-2196BF6FB79D,BootID:c9ec395e-18ec-40c2-b13c-49ae0567ad15,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:32b849c66869e0491116a333af3a7cafe226639d975818dfdb4d58fd7028a0b8 quay.io/k8scsi/csi-provisioner:v1.5.0-rc1],SizeBytes:50962879,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:48e9d3c4147ab5e4e070677a352f807a17e0f6bcf4cb19b8c34f6bfadf87781b quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2],SizeBytes:50515643,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:d3e2c5fb1d887bed2e6e8f931c65209b16cf7c28b40eaf7812268f9839908790 quay.io/k8scsi/csi-attacher:v2.0.0],SizeBytes:46143101,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f quay.io/k8scsi/csi-resizer:v0.3.0],SizeBytes:46014212,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 
k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:2114a2f70d34fa2821fb7f9bf373be5f44c8cbfeb6097fb5ba8eaf73cd38b72a k8s.gcr.io/addon-resizer:1.8.6],SizeBytes:37928220,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:43:23.878: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:43:23.920: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:43:23.964: INFO: metrics-server-v0.3.6-7d96444597-lfv7c started at 2019-11-22 09:29:45 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.964: INFO: Container metrics-server ready: true, restart count 0 Nov 22 11:43:23.964: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 22 11:43:23.964: INFO: heapster-v1.6.0-beta.1-859599df9f-9nl5x started at 2019-11-22 09:29:47 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.964: INFO: Container heapster ready: true, restart count 0 Nov 22 11:43:23.964: INFO: Container heapster-nanny ready: true, restart count 0 Nov 22 11:43:23.964: INFO: fluentd-gcp-v3.2.0-z4gtt started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.964: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:43:23.964: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:23.964: INFO: coredns-65567c7b57-vqz56 started at 2019-11-22 09:29:55 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.964: INFO: Container 
coredns ready: true, restart count 0 Nov 22 11:43:23.964: INFO: kubernetes-dashboard-7778f8b456-dwww9 started at 2019-11-22 09:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.964: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 22 11:43:23.964: INFO: kube-proxy-test-6bbac58e9d-minion-group-dtt3 started at 2019-11-22 11:33:24 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.964: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:43:23.964: INFO: metadata-proxy-v0.1-qj8lx started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:23.964: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:43:23.964: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:23.964: INFO: npd-v0.8.0-86sjk started at 2019-11-22 09:29:41 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.964: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:43:23.964: INFO: kube-dns-autoscaler-65bc6d4889-kncqk started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:23.964: INFO: Container autoscaler ready: true, restart count 0 Nov 22 11:43:24.116: INFO: Latency metrics for node test-6bbac58e9d-minion-group-dtt3 Nov 22 11:43:24.116: INFO: Logging node info for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:43:24.156: INFO: Node Info: &Node{ObjectMeta:{test-6bbac58e9d-minion-group-ldgb /api/v1/nodes/test-6bbac58e9d-minion-group-ldgb 7af88a45-91da-49e2-aad1-693979aa273c 36350 0 2019-11-22 09:29:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:test-6bbac58e9d-minion-group-ldgb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.gke.io/zone:us-west1-b topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/ubuntu-os-gke-cloud-dev-tests/zones/us-west1-b/instances/test-6bbac58e9d-minion-group-ldgb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://ubuntu-os-gke-cloud-dev-tests/us-west1-b/test-6bbac58e9d-minion-group-ldgb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103880232960 0} {<nil>} 101445540Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7836012544 0} {<nil>} 7652356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93492209510 0} {<nil>} 93492209510 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7573868544 0} {<nil>} 7396356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 
UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-11-22 11:40:11 +0000 UTC,LastTransitionTime:2019-11-22 09:29:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-22 09:29:39 +0000 UTC,LastTransitionTime:2019-11-22 09:29:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 10:33:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 11:39:56 +0000 UTC,LastTransitionTime:2019-11-22 11:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:104.199.127.196,},NodeAddress{Type:InternalDNS,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},NodeAddress{Type:Hostname,Address:test-6bbac58e9d-minion-group-ldgb.c.ubuntu-os-gke-cloud-dev-tests.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e1c327ba82c05d274d059f31a030f91,SystemUUID:7E1C327B-A82C-05D2-74D0-59F31A030F91,BootID:153cc788-4fe4-4a95-a234-e7f53446bb04,KernelVersion:4.15.0-1048-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-beta.2.22+486425533b66fa,KubeProxyVersion:v1.17.0-beta.2.22+486425533b66fa,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-beta.2.22_486425533b66fa],SizeBytes:130115752,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:3846a258f98448f6586700a37ae5974a0969cc0bb43f75b4fde0f198bb314103 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.6.0-gke.0],SizeBytes:113118759,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:276335fa25a703615cc2f2cdc51ba693fac4bdd70baa63f9cbf228291defd776 k8s.gcr.io/node-problem-detector:v0.8.0],SizeBytes:108715243,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/gke-release/csi-provisioner@sha256:3cd0f6b65a01d3b1e6fa9f2eb31448e7c15d79ee782249c206ad2710ac189cff gcr.io/gke-release/csi-provisioner:v1.4.0-gke.0],SizeBytes:60974770,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[gcr.io/gke-release/csi-attacher@sha256:2ffe521f6b59df3fa0dd444eef5f22472e3fe343efc1ab39aaa05e4f28832418 gcr.io/gke-release/csi-attacher:v2.0.0-gke.0],SizeBytes:51300540,},ContainerImage{Names:[gcr.io/gke-release/csi-resizer@sha256:650e2a11a7f877b51db5e23c5b7eae30b99714b535a59a6bba2aa9c165358cb1 
gcr.io/gke-release/csi-resizer:v0.3.0-gke.0],SizeBytes:51125391,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[gcr.io/gke-release/csi-node-driver-registrar@sha256:c09c18e90fa65c1156ab449681c38a9da732e2bb20a724ad10f90b2b0eec97d2 gcr.io/gke-release/csi-node-driver-registrar:v1.2.0-gke.0],SizeBytes:19358236,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 11:43:24.156: INFO: Logging kubelet events for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:43:24.197: INFO: Logging pods the kubelet thinks is on node test-6bbac58e9d-minion-group-ldgb Nov 22 11:43:24.241: INFO: l7-default-backend-678889f899-sn2pt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:24.241: INFO: Container default-http-backend ready: true, restart count 0 Nov 22 11:43:24.241: INFO: volume-snapshot-controller-0 started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:24.241: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 22 11:43:24.241: INFO: event-exporter-v0.3.1-747b47fcd-8chbt started at 2019-11-22 10:43:02 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:24.241: INFO: Container event-exporter ready: true, restart count 0 Nov 22 11:43:24.241: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:24.241: INFO: fluentd-gcp-scaler-76d9c77b4d-wh4nt started at 2019-11-22 10:43:02 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:24.241: INFO: Container fluentd-gcp-scaler ready: true, restart count 0 Nov 22 11:43:24.241: INFO: coredns-65567c7b57-s9876 
started at 2019-11-22 10:43:03 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:24.241: INFO: Container coredns ready: true, restart count 0 Nov 22 11:43:24.241: INFO: fluentd-gcp-v3.2.0-f9q96 started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:24.241: INFO: Container fluentd-gcp ready: true, restart count 0 Nov 22 11:43:24.241: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:24.241: INFO: metadata-proxy-v0.1-ptzjq started at 2019-11-22 09:29:31 +0000 UTC (0+2 container statuses recorded) Nov 22 11:43:24.241: INFO: Container metadata-proxy ready: true, restart count 0 Nov 22 11:43:24.241: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 22 11:43:24.241: INFO: kube-proxy-test-6bbac58e9d-minion-group-ldgb started at 2019-11-22 09:29:30 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:24.241: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 11:43:24.241: INFO: npd-v0.8.0-wmkxq started at 2019-11-22 09:29:42 +0000 UTC (0+1 container statuses recorded) Nov 22 11:43:24.241: INFO: Container node-problem-detector ready: true, restart count 0 Nov 22 11:43:24.381: INFO: Latency metrics for node test-6bbac58e9d-minion-group-ldgb Nov 22 11:43:24.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruptive-6928" for this suite.
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
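For triage, the failed cases can be pulled straight out of that JUnit report rather than read off the rendered page. The sketch below is illustrative only and is not part of the job output: it assumes junit_runner.xml has been copied out of the /workspace/_artifacts report directory named in the flags above into the current directory, and it simply prints the name of every testcase that carries a <failure> element.

// list_failures.go: minimal sketch for listing failed testcases from a
// downloaded junit_runner.xml. File name and location are assumptions;
// adjust to wherever the job artifacts were fetched.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Only the attributes needed to spot failures are modeled here; the real
// report written by the e2e runner carries additional fields.
type junitReport struct {
	TestCases []struct {
		Name    string `xml:"name,attr"`
		Failure *struct {
			Message string `xml:"message,attr"`
		} `xml:"failure"`
	} `xml:"testcase"`
}

func main() {
	data, err := os.ReadFile("junit_runner.xml") // assumed local copy of the artifact
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var report junitReport
	if err := xml.Unmarshal(data, &report); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	for _, tc := range report.TestCases {
		// A testcase without a <failure> child passed or was skipped.
		if tc.Failure != nil {
			fmt.Println("FAIL:", tc.Name)
		}
	}
}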
Check APIReachability
Deferred TearDown
DumpClusterLogs
Extract
IsUp
Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] Services should reconcile LB health check interval [Slow][Serial]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling [NodeFeature:RuntimeHandler] [Disruptive]
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node is deleted
Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node's API object is deleted
Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when pod is evicted
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning errors [Slow]
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
TearDown
TearDown Previous
Timeout
Up
kubectl version
list nodes
test setup
Kubernetes e2e suite Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Lease lease API should be available [Conformance]
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
Kubernetes e2e suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
Kubernetes e2e suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]
Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret
Kubernetes e2e suite [k8s.io] [Feature:TTLAfterFinished][NodeAlphaFeature:TTLAfterFinished] job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [k8s.io] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Mount propagation should propagate mounts to the host
Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error
Kubernetes e2e suite [k8s.io] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process [Flaky]
Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha runtime/default annotation [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha unconfined annotation on the container [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-apps] CronJob should delete successful/failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget
Kubernetes e2e suite [sig-apps] DisruptionController should update PodDisruptionBudget status
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should be evicted from unready Node [Feature:TaintEviction] All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be evicted after eviction timeout passes
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create and delete custom resource definition.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch configmaps.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch deployments.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch pods.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch secrets.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to get a pod with unauthorized user.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should list pods as impersonated user.
Kubernetes e2e suite [sig-auth] Certificates API should support building a client with a CSR
Kubernetes e2e suite [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] [Feature:TokenRequestProjection]
Kubernetes e2e suite [sig-auth] [Feature:DynamicAudit] should dynamically audit API calls
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling [sig-autoscaling] Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client kubectl get output should contain custom columns for each resource
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Cadvisor should be healthy on every node.
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver [Feature:StackdriverLogging] [Soak] should ingest logs from applications running for a prolonged amount of time
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest events [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest logs [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest system logs from all nodes [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch] should check that logs from containers are ingested into Elasticsearch
Kubernetes e2e suite [sig-instrumentation] Kibana Logging Instances Is Alive [Feature:Elasticsearch] should check that the Kibana logging instance is alive
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] DNS configMap federations [Feature:Federation] should be able to change federation configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=NodePort
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work from pods
Kubernetes e2e suite [sig-network] EndpointSlice [Feature:EndpointSlice] version v1 should create Endpoints and EndpointSlices for Pods matching a Service
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] multicluster ingress should get instance group annotation
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should create ingress with pre-shared certificate
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should support multiple TLS certs
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should create ingress with backend HTTPS
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should create ingress with pre-shared certificate
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should remove clusters as expected
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should support https-only annotation
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] single and multi-cluster ingresses should be able to exist together
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
Kubernetes e2e suite [sig-network] Network should resolve connection reset issue #74839 [Slow]
Kubernetes e2e suite [sig-network] Network should set TCP CLOSE_WAIT timeout
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf IPv4 [Experimental] [Feature:Networking-IPv4] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
Kubernetes e2e suite [sig-network] Networking IPerf IPv6 [Experimental] [Feature:Networking-IPv6] [Slow] [Feature:Networking-Performance] [LinuxOnly] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services [Feature:GCEAlphaFeature][Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type and ports of a service [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create an internal type load balancer [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] Services should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should be able to reach pod on ipv4 and ipv6 ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with cluster ip from primary service range [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with ipv4 cluster ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with ipv6 cluster ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should have ipv4 and ipv6 node podCIDRs
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: http
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: udp
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: http
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: udp
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
Kubernetes e2e suite [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage] should only be allowed to provision PDs in zones where nodes exist
Kubernetes e2e suite [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage] should schedule pods in the same zones as statically provisioned PVs
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones
Kubernetes e2e suite [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-service-catalog] [Feature:PodPreset] PodPreset should create a pod preset
Kubernetes e2e suite [sig-service-catalog] [Feature:PodPreset] PodPreset should not modify the pod on conflict
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot] snapshottable should create snapshot with defaults [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot] snapshottable should create snapshot with defaults [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Detaching volumes should not work when mount is in progress [Slow]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] deletion should be idempotent
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should not provision a volume in an unmanaged GCE zone.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should provision storage with different parameters
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when attachable
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [sig-storage] GCP Volumes GlusterFS should be mountable
Kubernetes e2e suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
Kubernetes e2e suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted