Result       | FAILURE
Tests        | 0 failed / 0 succeeded
Started      |
Elapsed      | 2h50m
Revision     |
Builder      | 49b7873c-f1e3-11ec-aa53-764d9ce5d219
Refs         | master:eae9c7f8 311:4c76cf20
infra-commit | 0cd795700
repo         | sigs.k8s.io/gcp-filestore-csi-driver
repo-commit  | 66ca03486b2fa3683b700d888f54c910287a0bc0
repos        | {u'sigs.k8s.io/gcp-filestore-csi-driver': u'master:eae9c7f8c2acf2e0a146104f3e5a7bcdc54d2a48,311:4c76cf209b8f0148919b2b0fcb943f5da6e60279'}
... skipping 140 lines ...
I0622 04:26:31.639] PWD is /go/src/sigs.k8s.io/gcp-filestore-csi-driver
I0622 04:26:31.640] STAGINGVERSION is 310b4a5f-78fb-41bb-80ec-d37bd61c14c3
I0622 04:26:31.640] STAGINGIMAGE is gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver
I0622 04:26:31.640] WEBHOOK_STAGINGIMAGE is gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver-webhook
I0622 04:26:31.640] # Ensure we use a builder that can leverage it (the default on linux will not)
I0622 04:26:31.641] docker buildx rm multiarch-multiplatform-builder
W0622 04:26:31.741] error: no builder "multiarch-multiplatform-builder" found
W0622 04:26:31.743] make: [Makefile:179: init-buildx] Error 1 (ignored)
I0622 04:26:31.844] docker buildx create --use --name=multiarch-multiplatform-builder
I0622 04:26:31.861] multiarch-multiplatform-builder
I0622 04:26:31.867] docker run --rm --privileged multiarch/qemu-user-static --reset --credential yes --persistent yes
W0622 04:26:31.968] Unable to find image 'multiarch/qemu-user-static:latest' locally
W0622 04:26:32.782] latest: Pulling from multiarch/qemu-user-static
W0622 04:26:32.783] 19d511225f94: Pulling fs layer
... skipping 642 lines ...
W0622 04:54:59.215] NODE_NAMES=e2e-test-prow-minion-group-0ffc e2e-test-prow-minion-group-gcmd e2e-test-prow-minion-group-qhf3
W0622 04:54:59.215] Trying to find master named 'e2e-test-prow-master'
W0622 04:54:59.215] Looking for address 'e2e-test-prow-master-ip'
I0622 04:55:02.983] Waiting up to 300 seconds for cluster initialization.
I0622 04:55:02.984]
I0622 04:55:02.985] This will continually check to see if the API for kubernetes is reachable.
I0622 04:55:02.987] This may time out if there was some uncaught error during start up.
I0622 04:55:02.988]
W0622 04:55:03.090] Using master: e2e-test-prow-master (external IP: 34.123.17.188; internal IP: (not set))
I0622 04:55:55.001] ...............Kubernetes cluster created.
I0622 04:55:55.158] Cluster "ci-kubernetes-e2e-gke-gpu_e2e-test-prow" set.
I0622 04:55:55.325] User "ci-kubernetes-e2e-gke-gpu_e2e-test-prow" set.
I0622 04:55:55.487] Context "ci-kubernetes-e2e-gke-gpu_e2e-test-prow" created.
... skipping 28 lines ...
I0622 04:57:05.930] e2e-test-prow-minion-group-qhf3   Ready   <none>   15s   v1.25.0-alpha.1.65+3beb8dc5967801
W0622 04:57:06.652] Warning: v1 ComponentStatus is deprecated in v1.19+
I0622 04:57:06.755] Validate output:
W0622 04:57:07.412] Warning: v1 ComponentStatus is deprecated in v1.19+
W0622 04:57:07.433] Done, listing cluster services:
W0622 04:57:07.433]
I0622 04:57:07.535] NAME                 STATUS    MESSAGE                         ERROR
I0622 04:57:07.535] etcd-1               Healthy   {"health":"true","reason":""}
I0622 04:57:07.536] etcd-0               Healthy   {"health":"true","reason":""}
I0622 04:57:07.536] controller-manager   Healthy   ok
I0622 04:57:07.536] scheduler            Healthy   ok
I0622 04:57:07.536] Cluster validation succeeded
I0622 04:57:08.144] Kubernetes control plane is running at https://34.123.17.188
... skipping 90 lines ...
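The init-buildx step traced above removes any stale builder, creates a fresh docker buildx builder, and registers QEMU binfmt handlers via multiarch/qemu-user-static so non-native image architectures can be built on the amd64 CI node. A minimal standalone sketch of that bootstrap, assuming Docker with the buildx plugin is available; the final build command, project and tag are illustrative only and not taken from the log:

#!/usr/bin/env bash
set -euo pipefail

# Recreate the named builder; ignore the "no builder found" error on a clean machine.
docker buildx rm multiarch-multiplatform-builder || true
docker buildx create --use --name=multiarch-multiplatform-builder

# Register QEMU emulators so foreign-architecture binaries can run during the build.
docker run --rm --privileged multiarch/qemu-user-static --reset --credential yes --persistent yes

# Illustrative multi-arch build and push (not the driver's actual Makefile target).
docker buildx build --platform=linux/amd64,linux/arm64 \
  -t gcr.io/<project>/gcp-filestore-csi-driver:<tag> --push .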
W0622 04:57:31.157] + ensure_var PKGDIR
W0622 04:57:31.157] + [[ -z /go/src/sigs.k8s.io/gcp-filestore-csi-driver ]]
W0622 04:57:31.157] + echo 'PKGDIR is /go/src/sigs.k8s.io/gcp-filestore-csi-driver'
W0622 04:57:31.158] + /go/src/sigs.k8s.io/gcp-filestore-csi-driver/deploy/kubernetes/install_kustomize.sh
I0622 04:57:31.969] {Version:kustomize/v4.0.4 GitCommit:9785bda7bedc6fc0fbd54f57fcf5b44a460cef76 BuildDate:2021-02-28T20:23:59Z GoOs:linux GoArch:amd64}
W0622 04:57:32.070] + kubectl get namespace gcp-filestore-csi-driver -v=2
W0622 04:57:32.134] Error from server (NotFound): namespaces "gcp-filestore-csi-driver" not found
W0622 04:57:32.139] + kubectl create namespace gcp-filestore-csi-driver -v=2
I0622 04:57:32.263] namespace/gcp-filestore-csi-driver created
W0622 04:57:32.365] + kubectl get clusterrolebinding cluster-admin-binding
W0622 04:57:32.404] Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "cluster-admin-binding" not found
W0622 04:57:32.412] ++ gcloud config get-value account
W0622 04:57:34.088] + kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0622 04:57:34.223] clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
W0622 04:57:34.324] + '[' stable-master '!=' dev ']'
W0622 04:57:34.325] + kubectl get secret gcp-filestore-csi-driver-sa --namespace=gcp-filestore-csi-driver
W0622 04:57:34.352] Error from server (NotFound): secrets "gcp-filestore-csi-driver-sa" not found
W0622 04:57:34.358] + kubectl create secret generic gcp-filestore-csi-driver-sa --from-file=/tmp/gcp-fs-driver-tmp4168571822/gcp_filestore_csi_driver_sa.json --namespace=gcp-filestore-csi-driver
I0622 04:57:34.493] secret/gcp-filestore-csi-driver-sa created
W0622 04:57:34.594] + '[' stable-master == multishare ']'
W0622 04:57:34.594] + readonly tmp_spec=/tmp/gcp-filestore-csi-driver-specs-generated.yaml
W0622 04:57:34.594] + tmp_spec=/tmp/gcp-filestore-csi-driver-specs-generated.yaml
W0622 04:57:34.595] + /go/src/sigs.k8s.io/gcp-filestore-csi-driver/bin/kustomize build /go/src/sigs.k8s.io/gcp-filestore-csi-driver/deploy/kubernetes/overlays/stable-master
... skipping 798 lines ...
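Condensed, the deploy script traced above performs four idempotent steps: create the driver namespace, bind the deploying account to cluster-admin, store the GCP service-account key as a Secret, and render the stable-master kustomize overlay into a temp spec (presumably applied in the lines skipped above). A hand-written equivalent, assuming kubectl, kustomize and gcloud are on PATH and run from the driver repo root; the key-file path is illustrative:

# 1. Namespace for the driver components.
kubectl create namespace gcp-filestore-csi-driver

# 2. Grant the deploying identity cluster-admin.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin --user "$(gcloud config get-value account)"

# 3. Service-account key consumed by the driver (path is illustrative).
kubectl create secret generic gcp-filestore-csi-driver-sa \
  --from-file=/path/to/gcp_filestore_csi_driver_sa.json \
  --namespace=gcp-filestore-csi-driver

# 4. Render the overlay to the generated spec and apply it.
kustomize build deploy/kubernetes/overlays/stable-master \
  > /tmp/gcp-filestore-csi-driver-specs-generated.yaml
kubectl apply -f /tmp/gcp-filestore-csi-driver-specs-generated.yaml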
W0622 04:57:56.014]   node.kubernetes.io/unreachable:NoExecute op=Exists
W0622 04:57:56.015]   node.kubernetes.io/unschedulable:NoSchedule op=Exists
W0622 04:57:56.016] Events:
W0622 04:57:56.017]   Type     Reason       Age   From               Message
W0622 04:57:56.017]   ----     ------       ----  ----               -------
W0622 04:57:56.018]   Normal   Scheduled    18s   default-scheduler  Successfully assigned gcp-filestore-csi-driver/gcp-filestore-csi-node-8bpxj to e2e-test-prow-minion-group-0ffc
W0622 04:57:56.018]   Warning  FailedMount  17s   kubelet            MountVolume.SetUp failed for volume "kube-api-access-6lvwq" : failed to sync configmap cache: timed out waiting for the condition
W0622 04:57:56.019]   Normal   Pulling      16s   kubelet            Pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0"
W0622 04:57:56.021]   Normal   Pulled       16s   kubelet            Successfully pulled image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0" in 710.448162ms
W0622 04:57:56.021]   Normal   Created      16s   kubelet            Created container csi-driver-registrar
W0622 04:57:56.022]   Normal   Started      15s   kubelet            Started container csi-driver-registrar
W0622 04:57:56.022]   Normal   Pulling      15s   kubelet            Pulling image "gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver:310b4a5f-78fb-41bb-80ec-d37bd61c14c3"
W0622 04:57:56.024]   Normal   Pulled       9s    kubelet            Successfully pulled image "gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver:310b4a5f-78fb-41bb-80ec-d37bd61c14c3" in 6.466006088s
... skipping 292 lines ...
I0622 04:58:02.029]
I0622 04:58:02.030] Running in parallel across 3 nodes
I0622 04:58:02.030]
I0622 04:59:16.264] Jun 22 04:58:02.029: INFO: >>> kubeConfig: /root/.kube/config
I0622 04:59:16.267] Jun 22 04:58:02.033: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
I0622 04:59:16.267] Jun 22 04:58:02.067: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
I0622 04:59:16.267] Jun 22 04:58:02.111: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0622 04:59:16.268] Jun 22 04:58:02.111: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0622 04:59:16.268] Jun 22 04:58:02.111: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
I0622 04:59:16.268] Jun 22 04:58:02.111: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
I0622 04:59:16.269] Jun 22 04:58:02.111: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.269] Jun 22 04:58:02.111: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.270] Jun 22 04:58:02.111: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.270] Jun 22 04:58:02.111: INFO: I0622 04:59:16.271] Jun 22 04:58:04.155: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.271] Jun 22 04:58:04.155: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.271] Jun 22 04:58:04.155: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) I0622 04:59:16.272] Jun 22 04:58:04.155: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.272] Jun 22 04:58:04.155: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.273] Jun 22 04:58:04.155: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.274] Jun 22 04:58:04.155: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.277] Jun 22 04:58:04.155: INFO: I0622 04:59:16.277] Jun 22 04:58:06.145: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.278] Jun 22 04:58:06.145: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.278] Jun 22 04:58:06.145: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) I0622 04:59:16.278] Jun 22 04:58:06.145: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.278] Jun 22 04:58:06.145: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.279] Jun 22 04:58:06.145: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.280] Jun 22 04:58:06.145: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.285] Jun 22 04:58:06.145: INFO: I0622 04:59:16.286] Jun 22 04:58:08.161: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.288] Jun 22 04:58:08.161: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.290] Jun 22 04:58:08.161: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) I0622 04:59:16.292] Jun 22 04:58:08.161: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.294] Jun 22 04:58:08.161: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.295] Jun 22 04:58:08.161: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.297] Jun 22 04:58:08.161: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.298] Jun 22 04:58:08.161: INFO: I0622 04:59:16.299] Jun 22 04:58:10.160: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.300] Jun 22 04:58:10.160: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.300] Jun 22 04:58:10.160: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) I0622 04:59:16.301] Jun 22 04:58:10.160: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.303] Jun 22 04:58:10.160: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.305] Jun 22 04:58:10.160: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.306] Jun 22 04:58:10.160: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.307] Jun 22 04:58:10.160: INFO: I0622 04:59:16.308] Jun 22 04:58:12.168: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.309] Jun 22 04:58:12.169: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.314] Jun 22 04:58:12.169: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) I0622 04:59:16.315] Jun 22 04:58:12.169: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.317] Jun 22 04:58:12.169: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.319] Jun 22 04:58:12.169: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.321] Jun 22 04:58:12.169: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.323] Jun 22 04:58:12.169: INFO: I0622 04:59:16.331] Jun 22 04:58:14.163: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.332] Jun 22 04:58:14.164: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.333] Jun 22 04:58:14.164: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) I0622 04:59:16.335] Jun 22 04:58:14.164: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.336] Jun 22 04:58:14.164: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.337] Jun 22 04:58:14.164: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.339] Jun 22 04:58:14.164: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.340] Jun 22 04:58:14.164: INFO: I0622 04:59:16.341] Jun 22 04:58:16.154: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.342] Jun 22 04:58:16.155: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.342] Jun 22 04:58:16.155: INFO: 27 / 29 pods in namespace 'kube-system' are running and ready (14 seconds elapsed) I0622 04:59:16.343] Jun 22 04:58:16.155: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.344] Jun 22 04:58:16.155: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.345] Jun 22 04:58:16.155: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.351] Jun 22 04:58:16.155: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.352] Jun 22 04:58:16.155: INFO: I0622 04:59:16.353] Jun 22 04:58:18.163: INFO: The status of Pod etcd-server-events-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.354] Jun 22 04:58:18.163: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.355] Jun 22 04:58:18.163: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.355] Jun 22 04:58:18.163: INFO: 27 / 30 pods in namespace 'kube-system' are running and ready (16 seconds elapsed) I0622 04:59:16.356] Jun 22 04:58:18.163: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.357] Jun 22 04:58:18.163: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.358] Jun 22 04:58:18.163: INFO: etcd-server-events-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.359] Jun 22 04:58:18.163: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.366] Jun 22 04:58:18.163: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.370] Jun 22 04:58:18.163: INFO: I0622 04:59:16.373] Jun 22 04:58:20.176: INFO: The status of Pod etcd-server-events-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.374] Jun 22 04:58:20.176: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.375] Jun 22 04:58:20.176: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.375] Jun 22 04:58:20.176: INFO: 27 / 30 pods in namespace 'kube-system' are running and ready (18 seconds elapsed) I0622 04:59:16.375] Jun 22 04:58:20.176: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.375] Jun 22 04:58:20.176: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.376] Jun 22 04:58:20.176: INFO: etcd-server-events-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.376] Jun 22 04:58:20.176: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.377] Jun 22 04:58:20.176: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.377] Jun 22 04:58:20.176: INFO: I0622 04:59:16.378] Jun 22 04:58:22.152: INFO: The status of Pod etcd-server-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.379] Jun 22 04:58:22.152: INFO: The status of Pod etcd-server-events-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.380] Jun 22 04:58:22.152: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.381] Jun 22 04:58:22.152: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.382] Jun 22 04:58:22.152: INFO: 27 / 31 pods in namespace 'kube-system' are running and ready (20 seconds elapsed) I0622 04:59:16.383] Jun 22 04:58:22.152: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.383] Jun 22 04:58:22.152: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.384] Jun 22 04:58:22.152: INFO: etcd-server-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.385] Jun 22 04:58:22.152: INFO: etcd-server-events-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.387] Jun 22 04:58:22.152: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.388] Jun 22 04:58:22.152: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.389] Jun 22 04:58:22.152: INFO: I0622 04:59:16.390] Jun 22 04:58:24.151: INFO: The status of Pod etcd-server-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.391] Jun 22 04:58:24.151: INFO: The status of Pod etcd-server-events-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.392] Jun 22 04:58:24.151: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.397] Jun 22 04:58:24.151: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.398] Jun 22 04:58:24.151: INFO: 27 / 31 pods in namespace 'kube-system' are running and ready (22 seconds elapsed) I0622 04:59:16.399] Jun 22 04:58:24.151: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.403] Jun 22 04:58:24.151: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.405] Jun 22 04:58:24.151: INFO: etcd-server-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.406] Jun 22 04:58:24.151: INFO: etcd-server-events-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.406] Jun 22 04:58:24.151: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.407] Jun 22 04:58:24.151: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.407] Jun 22 04:58:24.151: INFO: I0622 04:59:16.408] Jun 22 04:58:26.152: INFO: The status of Pod etcd-server-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.408] Jun 22 04:58:26.152: INFO: The status of Pod etcd-server-events-e2e-test-prow-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.408] Jun 22 04:58:26.152: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.409] Jun 22 04:58:26.152: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.409] Jun 22 04:58:26.152: INFO: 27 / 31 pods in namespace 'kube-system' are running and ready (24 seconds elapsed) I0622 04:59:16.409] Jun 22 04:58:26.152: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.409] Jun 22 04:58:26.152: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.410] Jun 22 04:58:26.152: INFO: etcd-server-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.410] Jun 22 04:58:26.152: INFO: etcd-server-events-e2e-test-prow-master e2e-test-prow-master Pending [] I0622 04:59:16.412] Jun 22 04:58:26.152: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.413] Jun 22 04:58:26.152: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.413] Jun 22 04:58:26.152: INFO: I0622 04:59:16.413] Jun 22 04:58:28.153: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.414] Jun 22 04:58:28.153: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.414] Jun 22 04:58:28.153: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (26 seconds elapsed) I0622 04:59:16.419] Jun 22 04:58:28.153: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.420] Jun 22 04:58:28.153: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.422] Jun 22 04:58:28.153: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.423] Jun 22 04:58:28.153: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.424] Jun 22 04:58:28.153: INFO: I0622 04:59:16.425] Jun 22 04:58:30.147: INFO: The status of Pod fluentd-gcp-v3.2.0-c5glc is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.426] Jun 22 04:58:30.147: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.427] Jun 22 04:58:30.147: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.428] Jun 22 04:58:30.147: INFO: 28 / 31 pods in namespace 'kube-system' are running and ready (28 seconds elapsed) I0622 04:59:16.428] Jun 22 04:58:30.147: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.429] Jun 22 04:58:30.147: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.430] Jun 22 04:58:30.147: INFO: fluentd-gcp-v3.2.0-c5glc e2e-test-prow-minion-group-gcmd Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:58:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:58:29 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:58:29 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:58:29 +0000 UTC }] I0622 04:59:16.432] Jun 22 04:58:30.147: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.433] Jun 22 04:58:30.147: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.434] Jun 22 04:58:30.147: INFO: I0622 04:59:16.435] Jun 22 04:58:32.147: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.436] Jun 22 04:58:32.147: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.436] Jun 22 04:58:32.147: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (30 seconds elapsed) I0622 04:59:16.437] Jun 22 04:58:32.147: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.438] Jun 22 04:58:32.147: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.443] Jun 22 04:58:32.147: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.445] Jun 22 04:58:32.148: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.446] Jun 22 04:58:32.148: INFO: I0622 04:59:16.447] Jun 22 04:58:34.152: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.448] Jun 22 04:58:34.152: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.449] Jun 22 04:58:34.152: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (32 seconds elapsed) I0622 04:59:16.449] Jun 22 04:58:34.152: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.450] Jun 22 04:58:34.152: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.452] Jun 22 04:58:34.153: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.453] Jun 22 04:58:34.153: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.454] Jun 22 04:58:34.153: INFO: I0622 04:59:16.455] Jun 22 04:58:36.155: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.456] Jun 22 04:58:36.155: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.456] Jun 22 04:58:36.155: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (34 seconds elapsed) I0622 04:59:16.457] Jun 22 04:58:36.155: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.458] Jun 22 04:58:36.155: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.463] Jun 22 04:58:36.155: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.465] Jun 22 04:58:36.155: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.466] Jun 22 04:58:36.155: INFO: I0622 04:59:16.467] Jun 22 04:58:38.166: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.468] Jun 22 04:58:38.166: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.469] Jun 22 04:58:38.166: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (36 seconds elapsed) I0622 04:59:16.469] Jun 22 04:58:38.166: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.470] Jun 22 04:58:38.166: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.472] Jun 22 04:58:38.166: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.473] Jun 22 04:58:38.166: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.478] Jun 22 04:58:38.166: INFO: I0622 04:59:16.480] Jun 22 04:58:40.158: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.481] Jun 22 04:58:40.158: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.482] Jun 22 04:58:40.158: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (38 seconds elapsed) I0622 04:59:16.483] Jun 22 04:58:40.158: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.483] Jun 22 04:58:40.158: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.485] Jun 22 04:58:40.158: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.486] Jun 22 04:58:40.159: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.487] Jun 22 04:58:40.159: INFO: I0622 04:59:16.488] Jun 22 04:58:42.154: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.489] Jun 22 04:58:42.155: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.490] Jun 22 04:58:42.155: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (40 seconds elapsed) I0622 04:59:16.491] Jun 22 04:58:42.155: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.499] Jun 22 04:58:42.155: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.500] Jun 22 04:58:42.155: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.502] Jun 22 04:58:42.155: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.503] Jun 22 04:58:42.155: INFO: I0622 04:59:16.505] Jun 22 04:58:44.150: INFO: The status of Pod l7-default-backend-5f6f9745f9-2krz2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.506] Jun 22 04:58:44.150: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.507] Jun 22 04:58:44.150: INFO: 29 / 31 pods in namespace 'kube-system' are running and ready (42 seconds elapsed) I0622 04:59:16.508] Jun 22 04:58:44.150: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready. 
I0622 04:59:16.508] Jun 22 04:58:44.150: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.509] Jun 22 04:58:44.150: INFO: l7-default-backend-5f6f9745f9-2krz2 e2e-test-prow-minion-group-0ffc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:12 +0000 UTC }] I0622 04:59:16.510] Jun 22 04:58:44.150: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.511] Jun 22 04:58:44.150: INFO: I0622 04:59:16.511] Jun 22 04:58:46.160: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.512] Jun 22 04:58:46.160: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (44 seconds elapsed) I0622 04:59:16.512] Jun 22 04:58:46.160: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.513] Jun 22 04:58:46.160: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.514] Jun 22 04:58:46.160: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.514] Jun 22 04:58:46.160: INFO: I0622 04:59:16.514] Jun 22 04:58:48.163: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.514] Jun 22 04:58:48.163: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (46 seconds elapsed) I0622 04:59:16.515] Jun 22 04:58:48.163: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. 
I0622 04:59:16.515] Jun 22 04:58:48.163: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.516] Jun 22 04:58:48.163: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.516] Jun 22 04:58:48.163: INFO: I0622 04:59:16.517] Jun 22 04:58:50.147: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.517] Jun 22 04:58:50.147: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (48 seconds elapsed) I0622 04:59:16.517] Jun 22 04:58:50.147: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.518] Jun 22 04:58:50.147: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.522] Jun 22 04:58:50.147: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.523] Jun 22 04:58:50.147: INFO: I0622 04:59:16.523] Jun 22 04:58:52.143: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.523] Jun 22 04:58:52.143: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (50 seconds elapsed) I0622 04:59:16.524] Jun 22 04:58:52.143: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.524] Jun 22 04:58:52.143: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.525] Jun 22 04:58:52.143: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.525] Jun 22 04:58:52.143: INFO: I0622 04:59:16.525] Jun 22 04:58:54.172: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.525] Jun 22 04:58:54.172: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (52 seconds elapsed) I0622 04:59:16.526] Jun 22 04:58:54.172: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. 
I0622 04:59:16.526] Jun 22 04:58:54.172: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.531] Jun 22 04:58:54.172: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.531] Jun 22 04:58:54.172: INFO: I0622 04:59:16.531] Jun 22 04:58:56.140: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.532] Jun 22 04:58:56.140: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (54 seconds elapsed) I0622 04:59:16.532] Jun 22 04:58:56.140: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.532] Jun 22 04:58:56.140: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.533] Jun 22 04:58:56.140: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.533] Jun 22 04:58:56.140: INFO: I0622 04:59:16.534] Jun 22 04:58:58.153: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.534] Jun 22 04:58:58.153: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (56 seconds elapsed) I0622 04:59:16.534] Jun 22 04:58:58.153: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.534] Jun 22 04:58:58.153: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.535] Jun 22 04:58:58.153: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.535] Jun 22 04:58:58.153: INFO: I0622 04:59:16.536] Jun 22 04:59:00.146: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.536] Jun 22 04:59:00.146: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (58 seconds elapsed) I0622 04:59:16.536] Jun 22 04:59:00.146: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. 
I0622 04:59:16.536] Jun 22 04:59:00.146: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.537] Jun 22 04:59:00.146: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.537] Jun 22 04:59:00.146: INFO: I0622 04:59:16.537] Jun 22 04:59:02.143: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.538] Jun 22 04:59:02.143: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (60 seconds elapsed) I0622 04:59:16.542] Jun 22 04:59:02.143: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.543] Jun 22 04:59:02.143: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.543] Jun 22 04:59:02.143: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.544] Jun 22 04:59:02.143: INFO: I0622 04:59:16.544] Jun 22 04:59:04.144: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.544] Jun 22 04:59:04.144: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (62 seconds elapsed) I0622 04:59:16.545] Jun 22 04:59:04.144: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.545] Jun 22 04:59:04.144: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.546] Jun 22 04:59:04.144: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.546] Jun 22 04:59:04.144: INFO: I0622 04:59:16.551] Jun 22 04:59:06.168: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.551] Jun 22 04:59:06.168: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (64 seconds elapsed) I0622 04:59:16.551] Jun 22 04:59:06.168: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. 
I0622 04:59:16.551] Jun 22 04:59:06.168: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.552] Jun 22 04:59:06.168: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.552] Jun 22 04:59:06.168: INFO: I0622 04:59:16.553] Jun 22 04:59:08.160: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.553] Jun 22 04:59:08.160: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (66 seconds elapsed) I0622 04:59:16.553] Jun 22 04:59:08.160: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.554] Jun 22 04:59:08.160: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.554] Jun 22 04:59:08.160: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.555] Jun 22 04:59:08.160: INFO: I0622 04:59:16.555] Jun 22 04:59:10.160: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.555] Jun 22 04:59:10.160: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (68 seconds elapsed) I0622 04:59:16.556] Jun 22 04:59:10.160: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.556] Jun 22 04:59:10.160: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.557] Jun 22 04:59:10.161: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.557] Jun 22 04:59:10.161: INFO: I0622 04:59:16.557] Jun 22 04:59:12.158: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.557] Jun 22 04:59:12.158: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (70 seconds elapsed) I0622 04:59:16.558] Jun 22 04:59:12.158: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. 
I0622 04:59:16.558] Jun 22 04:59:12.158: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.563] Jun 22 04:59:12.158: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.563] Jun 22 04:59:12.158: INFO: I0622 04:59:16.564] Jun 22 04:59:14.162: INFO: The status of Pod metrics-server-v0.5.2-755d66fb57-lqjpn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed I0622 04:59:16.564] Jun 22 04:59:14.162: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (72 seconds elapsed) I0622 04:59:16.564] Jun 22 04:59:14.162: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready. I0622 04:59:16.564] Jun 22 04:59:14.162: INFO: POD NODE PHASE GRACE CONDITIONS I0622 04:59:16.565] Jun 22 04:59:14.162: INFO: metrics-server-v0.5.2-755d66fb57-lqjpn e2e-test-prow-minion-group-qhf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-22 04:57:34 +0000 UTC }] I0622 04:59:16.565] Jun 22 04:59:14.162: INFO: I0622 04:59:16.565] Jun 22 04:59:16.207: INFO: 31 / 31 pods in namespace 'kube-system' are running and ready (74 seconds elapsed) ... skipping 27 lines ... I0622 04:59:16.575] I0622 04:59:16.575] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 04:59:16.575] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 04:59:16.575] [90mtest/e2e/storage/external/external.go:174[0m I0622 04:59:16.576] [Testpattern: Dynamic PV (delayed binding)] topology I0622 04:59:16.576] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 04:59:16.576] [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m I0622 04:59:16.576] [90mtest/e2e/storage/testsuites/topology.go:194[0m I0622 04:59:16.576] I0622 04:59:16.576] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support topology - skipping[0m I0622 04:59:16.576] I0622 04:59:16.577] test/e2e/storage/testsuites/topology.go:93 I0622 04:59:16.577] [90m------------------------------[0m ... skipping 274 lines ... 
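[editor's note] The repeating block above is the e2e framework waiting for every kube-system pod to report Ready before the storage suites start; it re-checks roughly every two seconds until "31 / 31 pods ... running and ready". A minimal sketch of that kind of readiness poll with client-go is below. The kubeconfig path is taken from the log, but the clientset setup and timeout are placeholders, not the framework's own helper.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady mirrors the "Running (with Ready = true)" condition the log polls for.
    func podReady(p *corev1.Pod) bool {
    	if p.Status.Phase != corev1.PodRunning {
    		return false
    	}
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 2s (the cadence visible in the log) until all kube-system pods are Ready.
    	err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    		if err != nil {
    			return false, nil // tolerate transient API errors and retry
    		}
    		ready := 0
    		for i := range pods.Items {
    			if podReady(&pods.Items[i]) {
    				ready++
    			}
    		}
    		fmt.Printf("%d / %d pods ready\n", ready, len(pods.Items))
    		return ready == len(pods.Items), nil
    	})
    	if err != nil {
    		panic(err)
    	}
    }

This simplified version treats completed pods as not ready; the real framework helper also accounts for those and for the expected replica counts printed in the log.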
I0622 05:04:59.000] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:04:59.000] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 05:04:59.000] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:04:59.001] (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents I0622 05:04:59.001] [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m I0622 05:04:59.001] [90m------------------------------[0m I0622 05:04:59.001] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":1,"skipped":0,"failed":0} I0622 05:04:59.001] I0622 05:04:59.012] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:04:59.013] [90m------------------------------[0m I0622 05:04:59.013] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath I0622 05:04:59.013] test/e2e/storage/framework/testsuite.go:51 I0622 05:04:59.013] Jun 22 05:04:59.010: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 283 lines ... I0622 05:05:16.095] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:05:16.095] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral I0622 05:05:16.095] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:05:16.096] should support multiple inline ephemeral volumes I0622 05:05:16.096] [90mtest/e2e/storage/testsuites/ephemeral.go:315[0m I0622 05:05:16.096] [90m------------------------------[0m I0622 05:05:16.096] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":133,"failed":0} I0622 05:05:16.096] I0622 05:05:29.615] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:05:29.615] [90m------------------------------[0m I0622 05:05:29.616] [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand I0622 05:05:29.616] test/e2e/storage/framework/testsuite.go:51 I0622 05:05:29.616] [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand ... skipping 9 lines ... 
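[editor's note] The "should support multiple inline ephemeral volumes" pass above exercises generic ephemeral volumes: the PVC template is embedded in the pod spec and provisioned through the same test StorageClass, then deleted with the pod. A rough sketch of such a pod follows; the image, mount paths, and object names are illustrative, only the StorageClass name is taken from this run.

    package example

    import (
    	corev1 "k8s.io/api/core/v1"
    	resource "k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // ephemeralVolume builds one inline ephemeral volume; the test above attaches two of these.
    func ephemeralVolume(name, storageClass string) corev1.Volume {
    	return corev1.Volume{
    		Name: name,
    		VolumeSource: corev1.VolumeSource{
    			Ephemeral: &corev1.EphemeralVolumeSource{
    				VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
    					Spec: corev1.PersistentVolumeClaimSpec{
    						AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
    						StorageClassName: &storageClass,
    						Resources: corev1.ResourceRequirements{
    							Requests: corev1.ResourceList{
    								corev1.ResourceStorage: resource.MustParse("1Ti"),
    							},
    						},
    					},
    				},
    			},
    		},
    	}
    }

    var pod = corev1.Pod{
    	ObjectMeta: metav1.ObjectMeta{Name: "inline-ephemeral-example"},
    	Spec: corev1.PodSpec{
    		Containers: []corev1.Container{{
    			Name:  "app",
    			Image: "busybox", // placeholder image
    			VolumeMounts: []corev1.VolumeMount{
    				{Name: "vol-0", MountPath: "/mnt/vol-0"},
    				{Name: "vol-1", MountPath: "/mnt/vol-1"},
    			},
    		}},
    		Volumes: []corev1.Volume{
    			ephemeralVolume("vol-0", "csi-gcpfs-fs-sc-basic-hdd"),
    			ephemeralVolume("vol-1", "csi-gcpfs-fs-sc-basic-hdd"),
    		},
    	},
    }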
I0622 05:05:29.618] Jun 22 05:04:59.494: INFO: Using claimSize:1Ti, test suite supported size:{ 1Gi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Gi} I0622 05:05:29.619] [1mSTEP[0m: creating a StorageClass volume-expand-2764-e2e-scbq9vc I0622 05:05:29.619] [1mSTEP[0m: creating a claim I0622 05:05:29.619] Jun 22 05:04:59.503: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:05:29.619] [1mSTEP[0m: Expanding non-expandable pvc I0622 05:05:29.620] Jun 22 05:04:59.525: INFO: currentPvcSize {{1099511627776 0} {<nil>} 1Ti BinarySI}, newSize {{1100585369600 0} {<nil>} BinarySI} I0622 05:05:29.620] Jun 22 05:04:59.534: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.620] core.PersistentVolumeClaimSpec{ I0622 05:05:29.620] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.620] Selector: nil, I0622 05:05:29.620] Resources: core.ResourceRequirements{ I0622 05:05:29.620] Limits: nil, I0622 05:05:29.621] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.621] }, I0622 05:05:29.621] VolumeName: "", I0622 05:05:29.621] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.621] ... // 3 identical fields I0622 05:05:29.621] } I0622 05:05:29.621] I0622 05:05:29.622] Jun 22 05:05:01.549: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.622] core.PersistentVolumeClaimSpec{ I0622 05:05:29.622] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.622] Selector: nil, I0622 05:05:29.622] Resources: core.ResourceRequirements{ I0622 05:05:29.622] Limits: nil, I0622 05:05:29.622] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.623] }, I0622 05:05:29.623] VolumeName: "", I0622 05:05:29.623] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.623] ... // 3 identical fields I0622 05:05:29.623] } I0622 05:05:29.623] I0622 05:05:29.623] Jun 22 05:05:03.547: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.624] core.PersistentVolumeClaimSpec{ I0622 05:05:29.624] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.624] Selector: nil, I0622 05:05:29.624] Resources: core.ResourceRequirements{ I0622 05:05:29.624] Limits: nil, I0622 05:05:29.624] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.624] }, I0622 05:05:29.624] VolumeName: "", I0622 05:05:29.625] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.625] ... // 3 identical fields I0622 05:05:29.625] } I0622 05:05:29.625] I0622 05:05:29.625] Jun 22 05:05:05.544: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.625] core.PersistentVolumeClaimSpec{ I0622 05:05:29.625] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.625] Selector: nil, I0622 05:05:29.625] Resources: core.ResourceRequirements{ I0622 05:05:29.625] Limits: nil, I0622 05:05:29.625] - Requests: core.ResourceList{ ... skipping 5 lines ... 
I0622 05:05:29.626] }, I0622 05:05:29.627] VolumeName: "", I0622 05:05:29.627] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.627] ... // 3 identical fields I0622 05:05:29.627] } I0622 05:05:29.628] I0622 05:05:29.628] Jun 22 05:05:07.547: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.628] core.PersistentVolumeClaimSpec{ I0622 05:05:29.628] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.628] Selector: nil, I0622 05:05:29.628] Resources: core.ResourceRequirements{ I0622 05:05:29.629] Limits: nil, I0622 05:05:29.629] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.629] }, I0622 05:05:29.629] VolumeName: "", I0622 05:05:29.630] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.630] ... // 3 identical fields I0622 05:05:29.630] } I0622 05:05:29.630] I0622 05:05:29.630] Jun 22 05:05:09.546: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.630] core.PersistentVolumeClaimSpec{ I0622 05:05:29.630] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.630] Selector: nil, I0622 05:05:29.630] Resources: core.ResourceRequirements{ I0622 05:05:29.631] Limits: nil, I0622 05:05:29.631] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.631] }, I0622 05:05:29.631] VolumeName: "", I0622 05:05:29.632] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.632] ... // 3 identical fields I0622 05:05:29.632] } I0622 05:05:29.632] I0622 05:05:29.632] Jun 22 05:05:11.545: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.632] core.PersistentVolumeClaimSpec{ I0622 05:05:29.633] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.633] Selector: nil, I0622 05:05:29.633] Resources: core.ResourceRequirements{ I0622 05:05:29.633] Limits: nil, I0622 05:05:29.633] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.634] }, I0622 05:05:29.634] VolumeName: "", I0622 05:05:29.634] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.634] ... // 3 identical fields I0622 05:05:29.634] } I0622 05:05:29.634] I0622 05:05:29.634] Jun 22 05:05:13.544: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.634] core.PersistentVolumeClaimSpec{ I0622 05:05:29.634] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.635] Selector: nil, I0622 05:05:29.635] Resources: core.ResourceRequirements{ I0622 05:05:29.635] Limits: nil, I0622 05:05:29.635] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.635] }, I0622 05:05:29.635] VolumeName: "", I0622 05:05:29.635] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.636] ... 
// 3 identical fields I0622 05:05:29.636] } I0622 05:05:29.636] I0622 05:05:29.636] Jun 22 05:05:15.543: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.636] core.PersistentVolumeClaimSpec{ I0622 05:05:29.636] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.636] Selector: nil, I0622 05:05:29.636] Resources: core.ResourceRequirements{ I0622 05:05:29.636] Limits: nil, I0622 05:05:29.636] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.637] }, I0622 05:05:29.637] VolumeName: "", I0622 05:05:29.637] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.637] ... // 3 identical fields I0622 05:05:29.637] } I0622 05:05:29.637] I0622 05:05:29.637] Jun 22 05:05:17.569: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.638] core.PersistentVolumeClaimSpec{ I0622 05:05:29.638] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.638] Selector: nil, I0622 05:05:29.638] Resources: core.ResourceRequirements{ I0622 05:05:29.638] Limits: nil, I0622 05:05:29.638] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.639] }, I0622 05:05:29.639] VolumeName: "", I0622 05:05:29.639] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.639] ... // 3 identical fields I0622 05:05:29.639] } I0622 05:05:29.639] I0622 05:05:29.640] Jun 22 05:05:19.544: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.640] core.PersistentVolumeClaimSpec{ I0622 05:05:29.640] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.640] Selector: nil, I0622 05:05:29.640] Resources: core.ResourceRequirements{ I0622 05:05:29.640] Limits: nil, I0622 05:05:29.640] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.641] }, I0622 05:05:29.641] VolumeName: "", I0622 05:05:29.642] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.642] ... // 3 identical fields I0622 05:05:29.642] } I0622 05:05:29.642] I0622 05:05:29.642] Jun 22 05:05:21.549: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.642] core.PersistentVolumeClaimSpec{ I0622 05:05:29.643] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.643] Selector: nil, I0622 05:05:29.643] Resources: core.ResourceRequirements{ I0622 05:05:29.643] Limits: nil, I0622 05:05:29.643] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.644] }, I0622 05:05:29.644] VolumeName: "", I0622 05:05:29.644] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.644] ... 
// 3 identical fields I0622 05:05:29.644] } I0622 05:05:29.645] I0622 05:05:29.645] Jun 22 05:05:23.545: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.645] core.PersistentVolumeClaimSpec{ I0622 05:05:29.645] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.645] Selector: nil, I0622 05:05:29.645] Resources: core.ResourceRequirements{ I0622 05:05:29.646] Limits: nil, I0622 05:05:29.646] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.647] }, I0622 05:05:29.647] VolumeName: "", I0622 05:05:29.648] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.648] ... // 3 identical fields I0622 05:05:29.648] } I0622 05:05:29.648] I0622 05:05:29.648] Jun 22 05:05:25.570: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.649] core.PersistentVolumeClaimSpec{ I0622 05:05:29.649] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.649] Selector: nil, I0622 05:05:29.649] Resources: core.ResourceRequirements{ I0622 05:05:29.649] Limits: nil, I0622 05:05:29.650] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.651] }, I0622 05:05:29.651] VolumeName: "", I0622 05:05:29.651] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.651] ... // 3 identical fields I0622 05:05:29.651] } I0622 05:05:29.652] I0622 05:05:29.652] Jun 22 05:05:27.547: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.652] core.PersistentVolumeClaimSpec{ I0622 05:05:29.652] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.652] Selector: nil, I0622 05:05:29.653] Resources: core.ResourceRequirements{ I0622 05:05:29.653] Limits: nil, I0622 05:05:29.653] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.653] }, I0622 05:05:29.653] VolumeName: "", I0622 05:05:29.653] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.654] ... // 3 identical fields I0622 05:05:29.654] } I0622 05:05:29.654] I0622 05:05:29.654] Jun 22 05:05:29.551: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.654] core.PersistentVolumeClaimSpec{ I0622 05:05:29.654] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.654] Selector: nil, I0622 05:05:29.654] Resources: core.ResourceRequirements{ I0622 05:05:29.654] Limits: nil, I0622 05:05:29.654] - Requests: core.ResourceList{ ... skipping 5 lines ... I0622 05:05:29.655] }, I0622 05:05:29.655] VolumeName: "", I0622 05:05:29.655] StorageClassName: &"volume-expand-2764-e2e-scbq9vc", I0622 05:05:29.655] ... 
// 3 identical fields I0622 05:05:29.655] } I0622 05:05:29.655] I0622 05:05:29.656] Jun 22 05:05:29.560: INFO: Error updating pvc csi-gcpfs-fs-sc-basic-hdd86kbp: PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd86kbp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims I0622 05:05:29.656] core.PersistentVolumeClaimSpec{ I0622 05:05:29.656] AccessModes: {"ReadWriteOnce"}, I0622 05:05:29.656] Selector: nil, I0622 05:05:29.656] Resources: core.ResourceRequirements{ I0622 05:05:29.656] Limits: nil, I0622 05:05:29.656] - Requests: core.ResourceList{ ... skipping 22 lines ... I0622 05:05:29.659] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:05:29.659] [Testpattern: Dynamic PV (default fs)] volume-expand I0622 05:05:29.659] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:05:29.659] should not allow expansion of pvcs without AllowVolumeExpansion property I0622 05:05:29.659] [90mtest/e2e/storage/testsuites/volume_expand.go:159[0m I0622 05:05:29.659] [90m------------------------------[0m I0622 05:05:29.660] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":369,"failed":0} I0622 05:05:29.660] I0622 05:05:29.660] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:05:29.660] [90m------------------------------[0m I0622 05:05:29.660] [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO I0622 05:05:29.660] test/e2e/storage/framework/testsuite.go:51 I0622 05:05:29.661] Jun 22 05:05:29.654: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 42 lines ... 
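[editor's note] The long "Error updating pvc" loop above is the expected outcome of this case: the suite repeatedly bumps the claim's requested size and expects every update to be rejected, because the test StorageClass was created without allowVolumeExpansion. A hedged sketch of the kind of update being attempted is below; the namespace and claim name are lifted from the log, but this is not the suite's own code.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	resource "k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ns, name := "volume-expand-2764", "csi-gcpfs-fs-sc-basic-hdd86kbp" // names taken from the log

    	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Grow the request by 1Gi, matching the currentPvcSize/newSize pair printed above.
    	newSize := pvc.Spec.Resources.Requests[corev1.ResourceStorage]
    	newSize.Add(resource.MustParse("1Gi"))
    	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = newSize

    	// With no allowVolumeExpansion on the class, the suite expects this Update to fail;
    	// in the run above the apiserver answered with the "spec is immutable after creation
    	// except resources.requests for bound claims" error each time.
    	_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
    	fmt.Println("update error:", err)
    }

For comparison, a class that does permit online growth would set AllowVolumeExpansion to true on the storagev1.StorageClass object; that path is not exercised in this run.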
I0622 05:06:58.659] test/e2e/framework/framework.go:186 I0622 05:06:58.659] [1mSTEP[0m: Creating a kubernetes client I0622 05:06:58.659] Jun 22 04:59:17.002: INFO: >>> kubeConfig: /root/.kube/config I0622 05:06:58.659] [1mSTEP[0m: Building a namespace api object, basename provisioning I0622 05:06:58.659] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 05:06:58.659] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 05:06:58.659] [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] I0622 05:06:58.659] test/e2e/storage/testsuites/subpath.go:258 I0622 05:06:58.659] Jun 22 04:59:17.032: INFO: Creating resource for dynamic PV I0622 05:06:58.660] Jun 22 04:59:17.032: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:06:58.660] [1mSTEP[0m: creating a StorageClass provisioning-2355-e2e-sc8vf78 I0622 05:06:58.660] [1mSTEP[0m: creating a claim I0622 05:06:58.660] Jun 22 04:59:17.038: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:06:58.660] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-v7wr I0622 05:06:58.660] [1mSTEP[0m: Checking for subpath error in container status I0622 05:06:58.660] Jun 22 05:04:39.092: INFO: Deleting pod "pod-subpath-test-dynamicpv-v7wr" in namespace "provisioning-2355" I0622 05:06:58.660] Jun 22 05:04:39.103: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-v7wr" to be fully deleted I0622 05:06:58.660] [1mSTEP[0m: Deleting pod I0622 05:06:58.661] Jun 22 05:04:43.114: INFO: Deleting pod "pod-subpath-test-dynamicpv-v7wr" in namespace "provisioning-2355" I0622 05:06:58.661] [1mSTEP[0m: Deleting pvc I0622 05:06:58.661] Jun 22 05:04:43.128: INFO: Deleting PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hddrfqhz" ... skipping 35 lines ... 
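[editor's note] Every dynamic-PV case above follows the same setup printed here: generate a one-off StorageClass from the external driver's manifest, create a claim against it sized from the "Using claimSize:1Ti" line, then run the pattern. A minimal sketch of those two objects follows. The provisioner name and the "tier" parameter are assumptions about the Filestore CSI driver, not values printed by this run; the 1Ti size is presumably pinned because a basic Filestore instance cannot be smaller than 1 TiB.

    package example

    import (
    	corev1 "k8s.io/api/core/v1"
    	storagev1 "k8s.io/api/storage/v1"
    	resource "k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A one-off class comparable to provisioning-2355-e2e-sc8vf78 above.
    var storageClass = storagev1.StorageClass{
    	ObjectMeta:  metav1.ObjectMeta{Name: "provisioning-example-sc"},
    	Provisioner: "filestore.csi.storage.gke.io",       // assumed driver name
    	Parameters:  map[string]string{"tier": "standard"}, // assumed basic-HDD tier parameter
    }

    var scName = storageClass.Name

    // The claim each case provisions before creating its test pod.
    var claim = corev1.PersistentVolumeClaim{
    	ObjectMeta: metav1.ObjectMeta{GenerateName: "csi-gcpfs-fs-sc-basic-hdd"},
    	Spec: corev1.PersistentVolumeClaimSpec{
    		AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
    		StorageClassName: &scName,
    		Resources: corev1.ResourceRequirements{
    			Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Ti")},
    		},
    	},
    }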
I0622 05:06:58.666] I0622 05:06:58.666] [32m• [SLOW TEST:461.654 seconds][0m I0622 05:06:58.666] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:06:58.666] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:06:58.666] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:06:58.667] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:06:58.667] should fail if subpath file is outside the volume [Slow][LinuxOnly] I0622 05:06:58.667] [90mtest/e2e/storage/testsuites/subpath.go:258[0m I0622 05:06:58.667] [90m------------------------------[0m I0622 05:06:58.667] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":116,"failed":0} I0622 05:06:58.667] I0622 05:07:00.810] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:07:00.810] [90m------------------------------[0m I0622 05:07:00.810] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode I0622 05:07:00.810] test/e2e/storage/framework/testsuite.go:51 I0622 05:07:00.811] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode I0622 05:07:00.811] test/e2e/framework/framework.go:186 I0622 05:07:00.811] [1mSTEP[0m: Creating a kubernetes client I0622 05:07:00.811] Jun 22 05:06:58.666: INFO: >>> kubeConfig: /root/.kube/config I0622 05:07:00.811] [1mSTEP[0m: Building a namespace api object, basename volumemode I0622 05:07:00.812] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 05:07:00.812] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 05:07:00.812] [It] should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly] I0622 05:07:00.812] test/e2e/storage/testsuites/volumemode.go:260 I0622 05:07:00.812] [1mSTEP[0m: Creating sc I0622 05:07:00.812] [1mSTEP[0m: Creating pv and pvc I0622 05:07:00.813] [1mSTEP[0m: Deleting pvc I0622 05:07:00.813] Jun 22 05:07:00.741: INFO: Deleting PersistentVolumeClaim "pvc-f2rs6" I0622 05:07:00.813] [1mSTEP[0m: Deleting sc I0622 05:07:00.813] [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode I0622 05:07:00.813] test/e2e/framework/framework.go:187 I0622 05:07:00.814] Jun 22 05:07:00.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0622 05:07:00.814] [1mSTEP[0m: Destroying namespace "volumemode-4238" for this suite. I0622 05:07:00.814] I0622 05:07:00.814] [32m•[0m I0622 05:07:00.814] [90m------------------------------[0m I0622 05:07:00.815] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]","total":-1,"completed":2,"skipped":122,"failed":0} I0622 05:07:00.815] I0622 05:07:00.829] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:07:00.829] [90m------------------------------[0m I0622 05:07:00.829] [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes I0622 05:07:00.829] test/e2e/storage/framework/testsuite.go:51 I0622 05:07:00.829] Jun 22 05:07:00.828: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ext3 -- skipping ... skipping 182 lines ... 
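[editor's note] The short volumeMode pass above ("should fail in binding dynamic provisioned PV to PVC") is a negative check: this driver only serves filesystem volumes, so a claim asking for raw block mode must never end up bound. Roughly, the claim side of that pattern looks like the sketch below; object names are illustrative.

    package example

    import (
    	corev1 "k8s.io/api/core/v1"
    	resource "k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var (
    	blockMode = corev1.PersistentVolumeBlock // the driver only supports corev1.PersistentVolumeFilesystem
    	scName    = "volumemode-example-sc"
    )

    // A claim in the spirit of pvc-f2rs6 above: requesting a raw block device from a
    // filesystem-only CSI driver, so the e2e suite expects it to stay unbound.
    var blockClaim = corev1.PersistentVolumeClaim{
    	ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
    	Spec: corev1.PersistentVolumeClaimSpec{
    		AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
    		StorageClassName: &scName,
    		VolumeMode:       &blockMode,
    		Resources: corev1.ResourceRequirements{
    			Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Ti")},
    		},
    	},
    }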
I0622 05:07:01.032] I0622 05:07:01.032] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 05:07:01.032] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:07:01.033] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:07:01.033] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 05:07:01.033] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:07:01.033] [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 05:07:01.033] [90mtest/e2e/storage/testsuites/subpath.go:280[0m I0622 05:07:01.034] I0622 05:07:01.034] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 05:07:01.034] I0622 05:07:01.034] test/e2e/storage/external/external.go:269 I0622 05:07:01.034] [90m------------------------------[0m ... skipping 77 lines ... I0622 05:10:06.549] Jun 22 05:05:16.124: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:10:06.549] [1mSTEP[0m: creating a StorageClass provisioning-549-e2e-scjxx96 I0622 05:10:06.549] [1mSTEP[0m: creating a claim I0622 05:10:06.549] Jun 22 05:05:16.129: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:10:06.549] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-pkbd I0622 05:10:06.549] [1mSTEP[0m: Creating a pod to test subpath I0622 05:10:06.550] Jun 22 05:05:16.160: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-pkbd" in namespace "provisioning-549" to be "Succeeded or Failed" I0622 05:10:06.550] Jun 22 05:05:16.166: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.902831ms I0622 05:10:06.550] Jun 22 05:05:18.234: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073962539s I0622 05:10:06.551] Jun 22 05:05:20.171: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010352391s I0622 05:10:06.551] Jun 22 05:05:22.171: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01065288s I0622 05:10:06.552] Jun 22 05:05:24.172: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011691408s I0622 05:10:06.552] Jun 22 05:05:26.173: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.012287795s ... skipping 65 lines ... I0622 05:10:06.571] Jun 22 05:07:38.171: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.010433815s I0622 05:10:06.571] Jun 22 05:07:40.173: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.012587396s I0622 05:10:06.572] Jun 22 05:07:42.173: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.013082002s I0622 05:10:06.572] Jun 22 05:07:44.171: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.011000003s I0622 05:10:06.572] Jun 22 05:07:46.171: INFO: Pod "pod-subpath-test-dynamicpv-pkbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2m30.010387475s I0622 05:10:06.572] [1mSTEP[0m: Saw pod success I0622 05:10:06.572] Jun 22 05:07:46.171: INFO: Pod "pod-subpath-test-dynamicpv-pkbd" satisfied condition "Succeeded or Failed" I0622 05:10:06.572] Jun 22 05:07:46.174: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-pkbd container test-container-subpath-dynamicpv-pkbd: <nil> I0622 05:10:06.572] [1mSTEP[0m: delete the pod I0622 05:10:06.573] Jun 22 05:07:46.220: INFO: Waiting for pod pod-subpath-test-dynamicpv-pkbd to disappear I0622 05:10:06.573] Jun 22 05:07:46.224: INFO: Pod pod-subpath-test-dynamicpv-pkbd no longer exists I0622 05:10:06.573] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-pkbd I0622 05:10:06.573] Jun 22 05:07:46.224: INFO: Deleting pod "pod-subpath-test-dynamicpv-pkbd" in namespace "provisioning-549" ... skipping 43 lines ... I0622 05:10:06.582] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:10:06.582] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:10:06.582] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:10:06.582] should support readOnly directory specified in the volumeMount I0622 05:10:06.583] [90mtest/e2e/storage/testsuites/subpath.go:367[0m I0622 05:10:06.583] [90m------------------------------[0m I0622 05:10:06.583] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":145,"failed":0} I0622 05:10:06.583] I0622 05:10:06.614] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:10:06.615] [90m------------------------------[0m I0622 05:10:06.615] [BeforeEach] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] I0622 05:10:06.615] test/e2e/storage/framework/testsuite.go:51 I0622 05:10:06.615] [BeforeEach] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 62 lines ... I0622 05:10:06.751] I0622 05:10:06.751] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 05:10:06.751] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:10:06.751] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:10:06.752] [Testpattern: Inline-volume (default fs)] subPath I0622 05:10:06.752] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:10:06.752] [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 05:10:06.752] [90mtest/e2e/storage/testsuites/subpath.go:258[0m I0622 05:10:06.752] I0622 05:10:06.753] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping[0m I0622 05:10:06.753] I0622 05:10:06.753] test/e2e/storage/external/external.go:269 I0622 05:10:06.753] [90m------------------------------[0m ... skipping 340 lines ... 
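[editor's note] The subPath case that just passed ("should support readOnly directory specified in the volumeMount") mounts a directory from the provisioned share read-only and verifies the container sees it that way. A sketch of the relevant part of the pod spec is below; the image, command, paths, and claim name are placeholders rather than the suite's actual values.

    package example

    import corev1 "k8s.io/api/core/v1"

    // In the spirit of pod-subpath-test-dynamicpv-pkbd: one volume backed by the Filestore
    // claim, mounted with ReadOnly plus a SubPath directory prepared earlier in the share.
    var container = corev1.Container{
    	Name:    "test-container-subpath-example",
    	Image:   "busybox", // placeholder
    	Command: []string{"sh", "-c", "test ! -w /test-volume && cat /test-volume/file"},
    	VolumeMounts: []corev1.VolumeMount{{
    		Name:      "test-volume",
    		MountPath: "/test-volume",
    		SubPath:   "provisioning-example", // directory inside the Filestore share
    		ReadOnly:  true,
    	}},
    }

    var podVolume = corev1.Volume{
    	Name: "test-volume",
    	VolumeSource: corev1.VolumeSource{
    		PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
    			ClaimName: "csi-gcpfs-fs-sc-basic-hdd-claim", // the dynamically provisioned claim
    		},
    	},
    }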
I0622 05:11:47.739] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:11:47.739] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:11:47.739] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:11:47.739] should be able to unmount after the subpath directory is deleted [LinuxOnly] I0622 05:11:47.739] [90mtest/e2e/storage/testsuites/subpath.go:447[0m I0622 05:11:47.739] [90m------------------------------[0m I0622 05:11:47.740] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":352,"failed":0} I0622 05:11:47.740] I0622 05:11:47.740] [36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:11:47.740] [90m------------------------------[0m I0622 05:11:47.740] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] I0622 05:11:47.740] test/e2e/storage/framework/testsuite.go:51 I0622 05:11:47.740] Jun 22 05:11:47.721: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support Block -- skipping ... skipping 190 lines ... I0622 05:14:41.069] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:14:41.069] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode I0622 05:14:41.069] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:14:41.069] should not mount / map unused volumes in a pod [LinuxOnly] I0622 05:14:41.070] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m I0622 05:14:41.070] [90m------------------------------[0m I0622 05:14:41.070] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":308,"failed":0} I0622 05:14:41.070] I0622 05:14:41.144] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:14:41.145] [90m------------------------------[0m I0622 05:14:41.145] [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] I0622 05:14:41.145] test/e2e/storage/framework/testsuite.go:51 I0622 05:14:41.146] [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 67 lines ... 
I0622 05:14:41.261] [36mvolume type "GenericEphemeralVolume" is ephemeral[0m I0622 05:14:41.261] I0622 05:14:41.261] test/e2e/storage/testsuites/snapshottable.go:280 I0622 05:14:41.261] [90m------------------------------[0m I0622 05:15:53.117] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:15:53.117] [90m------------------------------[0m I0622 05:15:53.118] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs","total":-1,"completed":3,"skipped":532,"failed":0} I0622 05:15:53.118] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:15:53.118] test/e2e/storage/framework/testsuite.go:51 I0622 05:15:53.118] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:15:53.118] test/e2e/framework/framework.go:186 I0622 05:15:53.118] [1mSTEP[0m: Creating a kubernetes client I0622 05:15:53.118] Jun 22 05:11:03.484: INFO: >>> kubeConfig: /root/.kube/config I0622 05:15:53.119] [1mSTEP[0m: Building a namespace api object, basename provisioning I0622 05:15:53.119] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 05:15:53.119] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 05:15:53.119] [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] I0622 05:15:53.119] test/e2e/storage/testsuites/subpath.go:269 I0622 05:15:53.119] Jun 22 05:11:03.519: INFO: Creating resource for dynamic PV I0622 05:15:53.119] Jun 22 05:11:03.519: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:15:53.119] [1mSTEP[0m: creating a StorageClass provisioning-6215-e2e-scd65dc I0622 05:15:53.119] [1mSTEP[0m: creating a claim I0622 05:15:53.120] Jun 22 05:11:03.524: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:15:53.120] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-p49x I0622 05:15:53.120] [1mSTEP[0m: Checking for subpath error in container status I0622 05:15:53.120] Jun 22 05:13:25.571: INFO: Deleting pod "pod-subpath-test-dynamicpv-p49x" in namespace "provisioning-6215" I0622 05:15:53.120] Jun 22 05:13:25.582: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-p49x" to be fully deleted I0622 05:15:53.120] [1mSTEP[0m: Deleting pod I0622 05:15:53.120] Jun 22 05:13:27.591: INFO: Deleting pod "pod-subpath-test-dynamicpv-p49x" in namespace "provisioning-6215" I0622 05:15:53.120] [1mSTEP[0m: Deleting pvc I0622 05:15:53.121] Jun 22 05:13:27.600: INFO: Deleting PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd96ktx" ... skipping 37 lines ... 
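[editor's note] The "Checking for subpath error in container status" step above is how the negative subPath cases detect success: the pod is expected never to run cleanly, and the failure is surfaced through a container waiting state rather than through the pod phase. A loose sketch of that kind of inspection follows; the real helper lives in test/e2e/storage/testsuites/subpath.go, and the string match used here is illustrative, not the suite's exact condition.

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Namespace and pod name taken from the log above.
    	pod, err := cs.CoreV1().Pods("provisioning-6215").Get(context.TODO(),
    		"pod-subpath-test-dynamicpv-p49x", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Look for a subPath-related failure in the container statuses; a healthy pod would
    	// have no waiting state here, and the negative test would then fail instead.
    	for _, st := range pod.Status.ContainerStatuses {
    		if w := st.State.Waiting; w != nil && strings.Contains(strings.ToLower(w.Message), "subpath") {
    			fmt.Printf("container %s reports subPath error: %s: %s\n", st.Name, w.Reason, w.Message)
    		}
    	}
    }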
I0622 05:15:53.129] I0622 05:15:53.129] [32m• [SLOW TEST:289.632 seconds][0m I0622 05:15:53.129] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:15:53.129] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:15:53.129] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:15:53.129] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:15:53.130] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] I0622 05:15:53.130] [90mtest/e2e/storage/testsuites/subpath.go:269[0m I0622 05:15:53.130] [90m------------------------------[0m I0622 05:15:53.130] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":-1,"completed":4,"skipped":532,"failed":0} I0622 05:15:53.131] I0622 05:15:53.131] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:15:53.131] [90m------------------------------[0m I0622 05:15:53.131] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning I0622 05:15:53.131] test/e2e/storage/framework/testsuite.go:51 I0622 05:15:53.131] Jun 22 05:15:53.129: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support Block -- skipping ... skipping 230 lines ... I0622 05:16:59.051] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:16:59.051] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 05:16:59.051] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:16:59.051] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents I0622 05:16:59.051] [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m I0622 05:16:59.052] [90m------------------------------[0m I0622 05:16:59.052] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":4,"skipped":416,"failed":0} I0622 05:16:59.052] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath I0622 05:16:59.053] test/e2e/storage/framework/testsuite.go:51 I0622 05:16:59.053] Jun 22 05:16:59.012: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping I0622 05:16:59.054] [AfterEach] [Testpattern: Inline-volume (default fs)] subPath I0622 05:16:59.054] test/e2e/framework/framework.go:187 I0622 05:16:59.054] ... skipping 21 lines ... I0622 05:16:59.057] I0622 05:16:59.057] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 05:16:59.057] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:16:59.058] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:16:59.058] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 05:16:59.058] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:16:59.058] [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 05:16:59.058] [90mtest/e2e/storage/testsuites/subpath.go:269[0m I0622 05:16:59.058] I0622 05:16:59.058] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 05:16:59.058] I0622 05:16:59.059] test/e2e/storage/external/external.go:269 I0622 05:16:59.059] [90m------------------------------[0m ... skipping 207 lines ... 
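[editor's note] The fsgroupchangepolicy pass above covers OnRootMismatch: kubelet re-owns the volume contents for the pod's fsGroup only when the volume root does not already match, instead of recursively changing ownership on every mount. That policy lives in the pod-level security context; a sketch with an arbitrary GID is below.

    package example

    import corev1 "k8s.io/api/core/v1"

    var (
    	fsGroup      int64                         = 1000 // arbitrary test GID
    	changePolicy corev1.PodFSGroupChangePolicy = corev1.FSGroupChangeOnRootMismatch
    )

    // The other accepted value is corev1.FSGroupChangeAlways, exercised by the
    // "(Always)[LinuxOnly]" cases earlier in this run.
    var podSecurityContext = corev1.PodSecurityContext{
    	FSGroup:             &fsGroup,
    	FSGroupChangePolicy: &changePolicy,
    }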
I0622 05:20:05.860] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:20:05.860] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral I0622 05:20:05.861] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:20:05.861] should create read-only inline ephemeral volume I0622 05:20:05.861] [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m I0622 05:20:05.861] [90m------------------------------[0m I0622 05:20:05.861] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":4,"skipped":423,"failed":0} I0622 05:20:05.861] I0622 05:20:05.936] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:20:05.937] [90m------------------------------[0m I0622 05:20:05.938] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning I0622 05:20:05.938] test/e2e/storage/framework/testsuite.go:51 I0622 05:20:05.938] Jun 22 05:20:05.934: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping ... skipping 51 lines ... I0622 05:20:44.535] Jun 22 05:15:53.207: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:20:44.535] [1mSTEP[0m: creating a StorageClass provisioning-7588-e2e-scjbvl2 I0622 05:20:44.535] [1mSTEP[0m: creating a claim I0622 05:20:44.536] Jun 22 05:15:53.213: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:20:44.536] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-pr8v I0622 05:20:44.536] [1mSTEP[0m: Creating a pod to test subpath I0622 05:20:44.536] Jun 22 05:15:53.254: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-pr8v" in namespace "provisioning-7588" to be "Succeeded or Failed" I0622 05:20:44.537] Jun 22 05:15:53.272: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 17.871367ms I0622 05:20:44.537] Jun 22 05:15:55.312: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058383133s I0622 05:20:44.538] Jun 22 05:15:57.277: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022880119s I0622 05:20:44.538] Jun 22 05:15:59.276: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021837073s I0622 05:20:44.539] Jun 22 05:16:01.276: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021785288s I0622 05:20:44.539] Jun 22 05:16:03.278: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023971003s ... skipping 63 lines ... 
I0622 05:20:44.569] Jun 22 05:18:11.276: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.02185986s I0622 05:20:44.569] Jun 22 05:18:13.280: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.026505048s I0622 05:20:44.569] Jun 22 05:18:15.276: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.022456864s I0622 05:20:44.570] Jun 22 05:18:17.277: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.02361913s I0622 05:20:44.570] Jun 22 05:18:19.276: INFO: Pod "pod-subpath-test-dynamicpv-pr8v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m26.02254804s I0622 05:20:44.571] [1mSTEP[0m: Saw pod success I0622 05:20:44.571] Jun 22 05:18:19.276: INFO: Pod "pod-subpath-test-dynamicpv-pr8v" satisfied condition "Succeeded or Failed" I0622 05:20:44.571] Jun 22 05:18:19.280: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-pr8v container test-container-subpath-dynamicpv-pr8v: <nil> I0622 05:20:44.572] [1mSTEP[0m: delete the pod I0622 05:20:44.572] Jun 22 05:18:19.310: INFO: Waiting for pod pod-subpath-test-dynamicpv-pr8v to disappear I0622 05:20:44.572] Jun 22 05:18:19.316: INFO: Pod pod-subpath-test-dynamicpv-pr8v no longer exists I0622 05:20:44.572] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-pr8v I0622 05:20:44.573] Jun 22 05:18:19.316: INFO: Deleting pod "pod-subpath-test-dynamicpv-pr8v" in namespace "provisioning-7588" ... skipping 44 lines ... I0622 05:20:44.587] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:20:44.587] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:20:44.587] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:20:44.587] should support readOnly file specified in the volumeMount [LinuxOnly] I0622 05:20:44.588] [90mtest/e2e/storage/testsuites/subpath.go:382[0m I0622 05:20:44.588] [90m------------------------------[0m I0622 05:20:44.588] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":563,"failed":0} I0622 05:20:44.588] I0622 05:20:44.588] [36mS[0m I0622 05:20:44.589] [90m------------------------------[0m I0622 05:20:44.589] [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes I0622 05:20:44.589] test/e2e/storage/framework/testsuite.go:51 I0622 05:20:44.589] Jun 22 05:20:44.534: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 72 lines ... 
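[editor's note] The long Pending-to-Succeeded ladders above come from the framework's "Waiting up to 10m0s for pod ... to be 'Succeeded or Failed'" helper; in this run each test pod needs roughly two and a half minutes, most of which is the Filestore share being provisioned and mounted. A simplified version of that wait, assuming a plain client-go clientset rather than the framework's e2epod helpers:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodSucceededOrFailed polls every 2s, like the Elapsed lines above, and
    // returns the terminal phase or a timeout error.
    func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
    	var phase corev1.PodPhase
    	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		phase = pod.Status.Phase
    		fmt.Printf("Pod %q: Phase=%q\n", name, phase)
    		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
    	})
    	return phase, err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Namespace and pod name taken from the log above.
    	phase, err := waitForPodSucceededOrFailed(cs, "provisioning-7588", "pod-subpath-test-dynamicpv-pr8v", 10*time.Minute)
    	fmt.Println(phase, err)
    }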
I0622 05:22:18.528] Jun 22 05:16:59.230: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:22:18.528] [1mSTEP[0m: creating a StorageClass provisioning-5064-e2e-schlllv I0622 05:22:18.528] [1mSTEP[0m: creating a claim I0622 05:22:18.528] Jun 22 05:16:59.235: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:22:18.529] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-rbvh I0622 05:22:18.529] [1mSTEP[0m: Creating a pod to test subpath I0622 05:22:18.529] Jun 22 05:16:59.272: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-rbvh" in namespace "provisioning-5064" to be "Succeeded or Failed" I0622 05:22:18.529] Jun 22 05:16:59.280: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 7.728973ms I0622 05:22:18.530] Jun 22 05:17:01.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012355774s I0622 05:22:18.530] Jun 22 05:17:03.286: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01384705s I0622 05:22:18.531] Jun 22 05:17:05.289: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016877363s I0622 05:22:18.531] Jun 22 05:17:07.283: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011594324s I0622 05:22:18.531] Jun 22 05:17:09.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01188799s ... skipping 72 lines ... I0622 05:22:18.554] Jun 22 05:19:35.346: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.073723137s I0622 05:22:18.554] Jun 22 05:19:37.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.012181689s I0622 05:22:18.554] Jun 22 05:19:39.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.012368726s I0622 05:22:18.554] Jun 22 05:19:41.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.01196971s I0622 05:22:18.555] Jun 22 05:19:43.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m44.012044956s I0622 05:22:18.555] [1mSTEP[0m: Saw pod success I0622 05:22:18.555] Jun 22 05:19:43.284: INFO: Pod "pod-subpath-test-dynamicpv-rbvh" satisfied condition "Succeeded or Failed" I0622 05:22:18.555] Jun 22 05:19:43.287: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-rbvh container test-container-subpath-dynamicpv-rbvh: <nil> I0622 05:22:18.556] [1mSTEP[0m: delete the pod I0622 05:22:18.556] Jun 22 05:19:43.314: INFO: Waiting for pod pod-subpath-test-dynamicpv-rbvh to disappear I0622 05:22:18.556] Jun 22 05:19:43.321: INFO: Pod pod-subpath-test-dynamicpv-rbvh no longer exists I0622 05:22:18.556] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-rbvh I0622 05:22:18.556] Jun 22 05:19:43.321: INFO: Deleting pod "pod-subpath-test-dynamicpv-rbvh" in namespace "provisioning-5064" ... skipping 46 lines ... 
I0622 05:22:18.567] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:22:18.567] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:22:18.568] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:22:18.568] should support existing single file [LinuxOnly] I0622 05:22:18.568] [90mtest/e2e/storage/testsuites/subpath.go:221[0m I0622 05:22:18.568] [90m------------------------------[0m I0622 05:22:18.568] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":567,"failed":0} I0622 05:22:18.568] I0622 05:22:18.569] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:22:18.569] [90m------------------------------[0m I0622 05:22:18.569] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] I0622 05:22:18.569] test/e2e/storage/framework/testsuite.go:51 I0622 05:22:18.569] Jun 22 05:22:18.539: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support Block -- skipping ... skipping 24 lines ... I0622 05:22:18.573] I0622 05:22:18.574] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 05:22:18.574] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:22:18.574] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:22:18.574] [Testpattern: Inline-volume (default fs)] subPath I0622 05:22:18.574] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:22:18.575] [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 05:22:18.575] [90mtest/e2e/storage/testsuites/subpath.go:242[0m I0622 05:22:18.575] I0622 05:22:18.575] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping[0m I0622 05:22:18.575] I0622 05:22:18.575] test/e2e/storage/external/external.go:269 I0622 05:22:18.575] [90m------------------------------[0m ... skipping 199 lines ... I0622 05:25:36.599] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:25:36.599] [Testpattern: Dynamic PV (default fs)] provisioning I0622 05:25:36.599] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:25:36.599] should mount multiple PV pointing to the same storage on the same node I0622 05:25:36.600] [90mtest/e2e/storage/testsuites/provisioning.go:525[0m I0622 05:25:36.600] [90m------------------------------[0m I0622 05:25:36.600] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node","total":-1,"completed":5,"skipped":569,"failed":0} I0622 05:25:36.600] I0622 05:25:36.600] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:25:36.600] [90m------------------------------[0m I0622 05:25:36.600] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 05:25:36.600] test/e2e/storage/framework/testsuite.go:51 I0622 05:25:36.601] Jun 22 05:25:36.572: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 63 lines ... 
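[editor sketch] The subPath cases passing above (existing single file, readOnly file in the volumeMount) boil down to a volumeMounts entry with subPath set. An illustrative pod shape, with placeholder pod and PVC names, looks like:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-example               # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: registry.k8s.io/e2e-test-images/busybox:1.29-2
    command: ["cat", "/mnt/test/existing-file"]
    volumeMounts:
    - name: data
      mountPath: /mnt/test/existing-file
      subPath: existing-file          # mounts a single path from the volume
      readOnly: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: filestore-pvc-example   # placeholder PVC
EOF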
I0622 05:25:36.768] test/e2e/framework/framework.go:186 I0622 05:25:36.768] [1mSTEP[0m: Creating a kubernetes client I0622 05:25:36.768] Jun 22 05:25:36.716: INFO: >>> kubeConfig: /root/.kube/config I0622 05:25:36.768] [1mSTEP[0m: Building a namespace api object, basename volumemode I0622 05:25:36.768] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 05:25:36.768] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 05:25:36.769] [It] should fail to use a volume in a pod with mismatched mode [Slow] I0622 05:25:36.769] test/e2e/storage/testsuites/volumemode.go:299 I0622 05:25:36.769] Jun 22 05:25:36.756: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not provide raw block - skipping I0622 05:25:36.769] [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode I0622 05:25:36.769] test/e2e/framework/framework.go:187 I0622 05:25:36.769] Jun 22 05:25:36.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0622 05:25:36.769] [1mSTEP[0m: Destroying namespace "volumemode-2771" for this suite. I0622 05:25:36.770] I0622 05:25:36.770] I0622 05:25:36.770] [36m[1mS [SKIPPING] [0.049 seconds][0m I0622 05:25:36.770] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:25:36.770] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:25:36.770] [Testpattern: Dynamic PV (block volmode)] volumeMode I0622 05:25:36.770] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:25:36.770] [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [It][0m I0622 05:25:36.770] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m I0622 05:25:36.770] I0622 05:25:36.771] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not provide raw block - skipping[0m I0622 05:25:36.771] I0622 05:25:36.771] test/e2e/storage/testsuites/volumes.go:114 I0622 05:25:36.771] [90m------------------------------[0m ... skipping 395 lines ... I0622 05:25:54.405] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:25:54.405] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 05:25:54.405] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:25:54.405] should concurrently access the single volume from pods on the same node I0622 05:25:54.405] [90mtest/e2e/storage/testsuites/multivolume.go:298[0m I0622 05:25:54.405] [90m------------------------------[0m I0622 05:25:54.406] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":-1,"completed":6,"skipped":630,"failed":0} I0622 05:25:54.406] I0622 05:25:54.406] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:25:54.407] [90m------------------------------[0m I0622 05:25:54.407] [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] I0622 05:25:54.407] test/e2e/storage/framework/testsuite.go:51 I0622 05:25:54.407] Jun 22 05:25:54.380: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 63 lines ... 
I0622 05:25:54.513] test/e2e/framework/framework.go:186 I0622 05:25:54.514] [1mSTEP[0m: Creating a kubernetes client I0622 05:25:54.514] Jun 22 05:25:54.464: INFO: >>> kubeConfig: /root/.kube/config I0622 05:25:54.514] [1mSTEP[0m: Building a namespace api object, basename volumemode I0622 05:25:54.514] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 05:25:54.514] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 05:25:54.515] [It] should fail to use a volume in a pod with mismatched mode [Slow] I0622 05:25:54.515] test/e2e/storage/testsuites/volumemode.go:299 I0622 05:25:54.515] Jun 22 05:25:54.499: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not provide raw block - skipping I0622 05:25:54.515] [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode I0622 05:25:54.515] test/e2e/framework/framework.go:187 I0622 05:25:54.516] Jun 22 05:25:54.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0622 05:25:54.516] [1mSTEP[0m: Destroying namespace "volumemode-7594" for this suite. I0622 05:25:54.516] I0622 05:25:54.516] I0622 05:25:54.516] [36m[1mS [SKIPPING] [0.047 seconds][0m I0622 05:25:54.516] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:25:54.516] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:25:54.516] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode I0622 05:25:54.516] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:25:54.517] [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [It][0m I0622 05:25:54.517] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m I0622 05:25:54.517] I0622 05:25:54.517] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not provide raw block - skipping[0m I0622 05:25:54.517] I0622 05:25:54.517] test/e2e/storage/testsuites/volumes.go:114 I0622 05:25:54.517] [90m------------------------------[0m ... skipping 195 lines ... I0622 05:31:04.566] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:31:04.566] [Testpattern: Dynamic PV (default fs)] volumeIO I0622 05:31:04.566] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:31:04.567] should write files of various sizes, verify size, validate content [Slow] I0622 05:31:04.567] [90mtest/e2e/storage/testsuites/volume_io.go:149[0m I0622 05:31:04.567] [90m------------------------------[0m I0622 05:31:04.567] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]","total":-1,"completed":6,"skipped":1024,"failed":0} I0622 05:31:04.567] I0622 05:31:04.602] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:31:04.603] [90m------------------------------[0m I0622 05:31:04.603] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] I0622 05:31:04.603] test/e2e/storage/framework/testsuite.go:51 I0622 05:31:04.603] Jun 22 05:31:04.601: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 382 lines ... 
I0622 05:31:06.381] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:31:06.381] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 05:31:06.381] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:31:06.381] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents I0622 05:31:06.382] [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m I0622 05:31:06.382] [90m------------------------------[0m I0622 05:31:06.382] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":7,"skipped":887,"failed":0} I0622 05:31:06.382] I0622 05:39:55.752] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:39:55.753] [90m------------------------------[0m I0622 05:39:55.753] [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes I0622 05:39:55.754] test/e2e/storage/framework/testsuite.go:51 I0622 05:39:55.754] [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes ... skipping 9 lines ... I0622 05:39:55.755] Jun 22 05:31:06.379: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:39:55.755] [1mSTEP[0m: creating a StorageClass volume-717-e2e-sczl26h I0622 05:39:55.756] [1mSTEP[0m: creating a claim I0622 05:39:55.756] Jun 22 05:31:06.384: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:39:55.756] [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-sq6w I0622 05:39:55.756] [1mSTEP[0m: Creating a pod to test exec-volume-test I0622 05:39:55.756] Jun 22 05:31:06.418: INFO: Waiting up to 10m0s for pod "exec-volume-test-dynamicpv-sq6w" in namespace "volume-717" to be "Succeeded or Failed" I0622 05:39:55.757] Jun 22 05:31:06.432: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 13.781071ms I0622 05:39:55.757] Jun 22 05:31:08.437: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018159273s I0622 05:39:55.757] Jun 22 05:31:10.438: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019565534s I0622 05:39:55.757] Jun 22 05:31:12.438: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019548193s I0622 05:39:55.757] Jun 22 05:31:14.437: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018290361s I0622 05:39:55.757] Jun 22 05:31:16.437: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018258538s ... skipping 182 lines ... I0622 05:39:55.808] Jun 22 05:37:22.440: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6m16.022035188s I0622 05:39:55.808] Jun 22 05:37:24.437: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6m18.019050355s I0622 05:39:55.808] Jun 22 05:37:26.444: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6m20.025313409s I0622 05:39:55.808] Jun 22 05:37:28.437: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6m22.018599249s I0622 05:39:55.809] Jun 22 05:37:30.436: INFO: Pod "exec-volume-test-dynamicpv-sq6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6m24.017867108s I0622 05:39:55.809] [1mSTEP[0m: Saw pod success I0622 05:39:55.809] Jun 22 05:37:30.436: INFO: Pod "exec-volume-test-dynamicpv-sq6w" satisfied condition "Succeeded or Failed" I0622 05:39:55.809] Jun 22 05:37:30.443: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod exec-volume-test-dynamicpv-sq6w container exec-container-dynamicpv-sq6w: <nil> I0622 05:39:55.810] [1mSTEP[0m: delete the pod I0622 05:39:55.810] Jun 22 05:37:30.492: INFO: Waiting for pod exec-volume-test-dynamicpv-sq6w to disappear I0622 05:39:55.810] Jun 22 05:37:30.496: INFO: Pod exec-volume-test-dynamicpv-sq6w no longer exists I0622 05:39:55.810] [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-sq6w I0622 05:39:55.811] Jun 22 05:37:30.496: INFO: Deleting pod "exec-volume-test-dynamicpv-sq6w" in namespace "volume-717" ... skipping 42 lines ... I0622 05:39:55.822] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:39:55.822] [Testpattern: Dynamic PV (default fs)] volumes I0622 05:39:55.822] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:39:55.822] should allow exec of files on the volume I0622 05:39:55.822] [90mtest/e2e/storage/testsuites/volumes.go:198[0m I0622 05:39:55.822] [90m------------------------------[0m I0622 05:39:55.823] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":893,"failed":0} I0622 05:39:55.823] I0622 05:39:55.823] [36mS[0m[36mS[0m I0622 05:39:55.823] [90m------------------------------[0m I0622 05:39:55.823] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 05:39:55.823] test/e2e/storage/framework/testsuite.go:51 I0622 05:39:55.823] Jun 22 05:39:55.756: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 387 lines ... I0622 05:41:24.702] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:41:24.702] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:41:24.702] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:41:24.702] should support restarting containers using file as subpath [Slow][LinuxOnly] I0622 05:41:24.702] [90mtest/e2e/storage/testsuites/subpath.go:333[0m I0622 05:41:24.703] [90m------------------------------[0m I0622 05:41:24.703] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":-1,"completed":7,"skipped":1260,"failed":0} I0622 05:41:24.703] I0622 05:45:09.444] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:45:09.444] [90m------------------------------[0m I0622 05:45:09.444] [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral I0622 05:45:09.445] test/e2e/storage/framework/testsuite.go:51 I0622 05:45:09.445] [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral ... skipping 136 lines ... 
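[editor sketch] Pods in these cases sit in Pending for several minutes (6m24s above) while the external provisioner creates the Filestore instance. Provisioning progress is visible on the claim's events; the namespace below is the one from the exec-volume test above.

kubectl describe pvc -n volume-717
kubectl get events -n volume-717 --sort-by=.lastTimestamp | grep -Ei 'provision|filestore'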
I0622 05:45:09.479] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:45:09.479] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral I0622 05:45:09.480] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:45:09.480] should create read/write inline ephemeral volume I0622 05:45:09.480] [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m I0622 05:45:09.480] [90m------------------------------[0m I0622 05:45:09.480] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":9,"skipped":975,"failed":0} I0622 05:45:09.481] I0622 05:45:09.483] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:45:09.483] [90m------------------------------[0m I0622 05:45:09.483] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] I0622 05:45:09.483] test/e2e/storage/framework/testsuite.go:51 I0622 05:45:09.484] Jun 22 05:45:09.481: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping ... skipping 27 lines ... I0622 05:49:24.097] [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) I0622 05:49:24.097] test/e2e/storage/testsuites/snapshottable.go:177 I0622 05:49:24.098] Jun 22 05:22:18.666: INFO: Creating resource for dynamic PV I0622 05:49:24.098] Jun 22 05:22:18.666: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:49:24.098] [1mSTEP[0m: creating a StorageClass snapshotting-9436-e2e-sczqs98 I0622 05:49:24.098] [1mSTEP[0m: [init] starting a pod to use the claim I0622 05:49:24.098] Jun 22 05:22:18.682: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-9vp2g" in namespace "snapshotting-9436" to be "Succeeded or Failed" I0622 05:49:24.099] Jun 22 05:22:18.704: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 22.051375ms I0622 05:49:24.099] Jun 22 05:22:20.711: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028179121s I0622 05:49:24.099] Jun 22 05:22:22.709: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0269s I0622 05:49:24.099] Jun 22 05:22:24.710: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027495303s I0622 05:49:24.099] Jun 22 05:22:26.711: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029081044s I0622 05:49:24.099] Jun 22 05:22:28.709: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026728934s ... skipping 73 lines ... I0622 05:49:24.112] Jun 22 05:24:56.712: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.02941704s I0622 05:49:24.112] Jun 22 05:24:58.709: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.02689189s I0622 05:49:24.112] Jun 22 05:25:00.709: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m42.026562835s I0622 05:49:24.112] Jun 22 05:25:02.709: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.026501939s I0622 05:49:24.112] Jun 22 05:25:04.713: INFO: Pod "pvc-snapshottable-tester-9vp2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m46.030317426s I0622 05:49:24.112] [1mSTEP[0m: Saw pod success I0622 05:49:24.113] Jun 22 05:25:04.713: INFO: Pod "pvc-snapshottable-tester-9vp2g" satisfied condition "Succeeded or Failed" I0622 05:49:24.113] [1mSTEP[0m: [init] checking the claim I0622 05:49:24.113] [1mSTEP[0m: creating a SnapshotClass I0622 05:49:24.113] [1mSTEP[0m: creating a dynamic VolumeSnapshot I0622 05:49:24.113] Jun 22 05:25:04.784: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-wv8d7 to become ready I0622 05:49:24.113] Jun 22 05:25:04.804: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.113] Jun 22 05:25:06.810: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. ... skipping 142 lines ... I0622 05:49:24.131] Jun 22 05:29:53.842: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.131] Jun 22 05:29:55.846: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.132] Jun 22 05:29:57.852: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.132] Jun 22 05:29:59.857: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.132] Jun 22 05:30:01.862: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.132] Jun 22 05:30:03.866: INFO: VolumeSnapshot snapshot-wv8d7 found but is not ready. I0622 05:49:24.132] Jun 22 05:30:05.867: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 05:49:24.132] Jun 22 05:30:05.867: INFO: Unexpected error: I0622 05:49:24.132] <*errors.errorString | 0xc003397450>: { I0622 05:49:24.132] s: "VolumeSnapshot snapshot-wv8d7 is not ready within 5m0s", I0622 05:49:24.132] } I0622 05:49:24.132] Jun 22 05:30:05.867: FAIL: VolumeSnapshot snapshot-wv8d7 is not ready within 5m0s I0622 05:49:24.133] I0622 05:49:24.133] Full Stack Trace I0622 05:49:24.133] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc00344c758?}, 0xc00344c300) I0622 05:49:24.133] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 05:49:24.133] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7f2e44473020, 0xc000b20160}, 0xc001a2e120, {{0x71d38cb, 0x22}, {0x0, 0x0}, {0x717cceb, 0x16}, {0x0, ...}, ...}, ...) I0622 05:49:24.133] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 464 lines ... I0622 05:49:24.211] Jun 22 05:44:57.933: INFO: Pod "restored-pvc-tester-qdhlh": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.033161533s I0622 05:49:24.211] Jun 22 05:44:59.938: INFO: Pod "restored-pvc-tester-qdhlh": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.037999754s I0622 05:49:24.211] Jun 22 05:45:01.934: INFO: Pod "restored-pvc-tester-qdhlh": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.033757669s I0622 05:49:24.211] Jun 22 05:45:03.933: INFO: Pod "restored-pvc-tester-qdhlh": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.033074417s I0622 05:49:24.211] Jun 22 05:45:05.933: INFO: Pod "restored-pvc-tester-qdhlh": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.032683439s I0622 05:49:24.212] Jun 22 05:45:05.937: INFO: Pod "restored-pvc-tester-qdhlh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15m0.03652391s I0622 05:49:24.212] Jun 22 05:45:05.937: INFO: Unexpected error: I0622 05:49:24.212] <*pod.timeoutError | 0xc0034bfce0>: { I0622 05:49:24.212] msg: "timed out while waiting for pod snapshotting-9436/restored-pvc-tester-qdhlh to be running", I0622 05:49:24.212] observedObjects: [ I0622 05:49:24.212] { I0622 05:49:24.212] TypeMeta: {Kind: "", APIVersion: ""}, I0622 05:49:24.212] ObjectMeta: { ... skipping 356 lines ... I0622 05:49:24.252] QOSClass: "BestEffort", I0622 05:49:24.252] EphemeralContainerStatuses: nil, I0622 05:49:24.252] }, I0622 05:49:24.252] }, I0622 05:49:24.252] ], I0622 05:49:24.252] } I0622 05:49:24.252] Jun 22 05:45:05.938: FAIL: timed out while waiting for pod snapshotting-9436/restored-pvc-tester-qdhlh to be running I0622 05:49:24.252] I0622 05:49:24.252] Full Stack Trace I0622 05:49:24.252] k8s.io/kubernetes/test/e2e/storage/testsuites.(*snapshottableTestSuite).DefineTests.func1.4.1() I0622 05:49:24.252] test/e2e/storage/testsuites/snapshottable.go:260 +0xf05 I0622 05:49:24.253] k8s.io/kubernetes/test/e2e.RunE2ETests(0x2565617?) I0622 05:49:24.253] test/e2e/e2e.go:130 +0x686 ... skipping 159 lines ... I0622 05:49:24.278] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:22:19 +0000 UTC - event for pvc-snapshottable-tester-9vp2g-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } Provisioning: External provisioner is provisioning volume for claim "snapshotting-9436/pvc-snapshottable-tester-9vp2g-my-volume" I0622 05:49:24.278] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:24:55 +0000 UTC - event for pvc-snapshottable-tester-9vp2g: {default-scheduler } Scheduled: Successfully assigned snapshotting-9436/pvc-snapshottable-tester-9vp2g to e2e-test-prow-minion-group-gcmd I0622 05:49:24.279] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:24:55 +0000 UTC - event for pvc-snapshottable-tester-9vp2g-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningSucceeded: Successfully provisioned volume pvc-5791a644-d0f0-4e67-ae6f-42c787938c63 I0622 05:49:24.279] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:24:59 +0000 UTC - event for pvc-snapshottable-tester-9vp2g: {kubelet e2e-test-prow-minion-group-gcmd} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine I0622 05:49:24.279] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:24:59 +0000 UTC - event for pvc-snapshottable-tester-9vp2g: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester I0622 05:49:24.279] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:25:00 +0000 UTC - event for pvc-snapshottable-tester-9vp2g: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 05:49:24.280] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:25:04 +0000 UTC - event for snapshot-wv8d7: {snapshot-controller } SnapshotFinalizerError: Failed to check and update snapshot: snapshot controller failed to update snapshotting-9436/snapshot-wv8d7 on API server: volumesnapshots.snapshot.storage.k8s.io "snapshot-wv8d7" is forbidden: User "system:serviceaccount:kube-system:volume-snapshot-controller" cannot patch resource "volumesnapshots" in API group "snapshot.storage.k8s.io" in the namespace "snapshotting-9436" I0622 05:49:24.280] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:25:04 +0000 UTC - event for snapshot-wv8d7: {snapshot-controller } CreatingSnapshot: Waiting for a snapshot snapshotting-9436/snapshot-wv8d7 to be created by 
the CSI driver. I0622 05:49:24.280] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:30:05 +0000 UTC - event for restored-pvc-tester-qdhlh: {default-scheduler } FailedScheduling: 0/4 nodes are available: 4 waiting for ephemeral volume controller to create the persistentvolumeclaim "restored-pvc-tester-qdhlh-my-volume". preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling. I0622 05:49:24.281] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:30:05 +0000 UTC - event for restored-pvc-tester-qdhlh-my-volume: {persistentvolume-controller } WaitForPodScheduled: waiting for pod restored-pvc-tester-qdhlh to be scheduled I0622 05:49:24.281] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:30:07 +0000 UTC - event for restored-pvc-tester-qdhlh-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningFailed: failed to provision volume with StorageClass "snapshotting-9436-e2e-sczqs98": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-wv8d7: snapshot snapshot-wv8d7 is not Ready I0622 05:49:24.281] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:30:07 +0000 UTC - event for restored-pvc-tester-qdhlh-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } Provisioning: External provisioner is provisioning volume for claim "snapshotting-9436/restored-pvc-tester-qdhlh-my-volume" I0622 05:49:24.282] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:30:07 +0000 UTC - event for restored-pvc-tester-qdhlh-my-volume: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "filestore.csi.storage.gke.io" or manually created by system administrator I0622 05:49:24.282] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:40:07 +0000 UTC - event for restored-pvc-tester-qdhlh: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition I0622 05:49:24.282] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:45:06 +0000 UTC - event for restored-pvc-tester-qdhlh: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "restored-pvc-tester-qdhlh" not found I0622 05:49:24.282] Jun 22 05:49:23.019: INFO: At 2022-06-22 05:45:06 +0000 UTC - event for snapshot-wv8d7: {snapshot-controller } SnapshotDeletePending: Snapshot is being used to restore a PVC I0622 05:49:24.283] Jun 22 05:49:23.030: INFO: POD NODE PHASE GRACE CONDITIONS ... skipping 138 lines ... 
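[editor sketch] For the failure above (snapshot-wv8d7 never reported ready within 5m0s, and the snapshot-controller logged a forbidden patch on VolumeSnapshots), the usual next step is to inspect the snapshot object, its bound VolumeSnapshotContent, and the csi-snapshotter sidecar logs. The label selector and sidecar container name below are assumptions about how the driver is deployed, not values taken from this log.

kubectl get volumesnapshot snapshot-wv8d7 -n snapshotting-9436 -o yaml
kubectl get volumesnapshotcontent | grep snapshotting-9436
# Assumed label and sidecar container names; adjust to the actual controller workload.
kubectl -n gcp-filestore-csi-driver logs -l app=gcp-filestore-csi-driver -c csi-snapshotter --tail=100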
I0622 05:49:24.343] [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m I0622 05:49:24.344] I0622 05:49:24.344] [91mJun 22 05:30:05.867: VolumeSnapshot snapshot-wv8d7 is not ready within 5m0s[0m I0622 05:49:24.344] I0622 05:49:24.344] test/e2e/storage/utils/snapshot.go:86 I0622 05:49:24.344] [90m------------------------------[0m I0622 05:49:24.344] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":-1,"completed":5,"skipped":642,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 05:49:24.344] I0622 05:49:24.392] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:49:24.393] [90m------------------------------[0m I0622 05:49:24.393] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] I0622 05:49:24.393] test/e2e/storage/framework/testsuite.go:51 I0622 05:49:24.393] Jun 22 05:49:24.391: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 135 lines ... I0622 05:49:53.901] Jun 22 05:45:09.604: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 05:49:53.902] [1mSTEP[0m: creating a StorageClass provisioning-7337-e2e-sckq2bs I0622 05:49:53.902] [1mSTEP[0m: creating a claim I0622 05:49:53.902] Jun 22 05:45:09.610: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 05:49:53.902] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-vgdr I0622 05:49:53.902] [1mSTEP[0m: Creating a pod to test multi_subpath I0622 05:49:53.902] Jun 22 05:45:09.642: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-vgdr" in namespace "provisioning-7337" to be "Succeeded or Failed" I0622 05:49:53.902] Jun 22 05:45:09.651: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.220668ms I0622 05:49:53.902] Jun 22 05:45:11.657: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015106568s I0622 05:49:53.903] Jun 22 05:45:13.656: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.014646228s I0622 05:49:53.903] Jun 22 05:45:15.657: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015732695s I0622 05:49:53.903] Jun 22 05:45:17.656: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014285211s I0622 05:49:53.903] Jun 22 05:45:19.655: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013172648s ... skipping 62 lines ... I0622 05:49:53.918] Jun 22 05:47:25.655: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.01365645s I0622 05:49:53.918] Jun 22 05:47:27.657: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.015070958s I0622 05:49:53.918] Jun 22 05:47:29.656: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.014454934s I0622 05:49:53.918] Jun 22 05:47:31.655: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.013628187s I0622 05:49:53.918] Jun 22 05:47:33.656: INFO: Pod "pod-subpath-test-dynamicpv-vgdr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m24.014259664s I0622 05:49:53.919] [1mSTEP[0m: Saw pod success I0622 05:49:53.919] Jun 22 05:47:33.656: INFO: Pod "pod-subpath-test-dynamicpv-vgdr" satisfied condition "Succeeded or Failed" I0622 05:49:53.919] Jun 22 05:47:33.660: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-vgdr container test-container-subpath-dynamicpv-vgdr: <nil> I0622 05:49:53.919] [1mSTEP[0m: delete the pod I0622 05:49:53.919] Jun 22 05:47:33.706: INFO: Waiting for pod pod-subpath-test-dynamicpv-vgdr to disappear I0622 05:49:53.919] Jun 22 05:47:33.711: INFO: Pod pod-subpath-test-dynamicpv-vgdr no longer exists I0622 05:49:53.919] [1mSTEP[0m: Deleting pod I0622 05:49:53.919] Jun 22 05:47:33.711: INFO: Deleting pod "pod-subpath-test-dynamicpv-vgdr" in namespace "provisioning-7337" ... skipping 41 lines ... I0622 05:49:53.925] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:49:53.925] [Testpattern: Dynamic PV (default fs)] subPath I0622 05:49:53.926] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:49:53.926] should support creating multiple subpath from same volumes [Slow] I0622 05:49:53.926] [90mtest/e2e/storage/testsuites/subpath.go:296[0m I0622 05:49:53.926] [90m------------------------------[0m I0622 05:49:53.926] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":-1,"completed":10,"skipped":1054,"failed":0} I0622 05:49:53.926] I0622 05:49:53.926] [36mS[0m[36mS[0m[36mS[0m I0622 05:49:53.926] [90m------------------------------[0m I0622 05:49:53.927] [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes I0622 05:49:53.927] test/e2e/storage/framework/testsuite.go:51 I0622 05:49:53.927] Jun 22 05:49:53.905: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 203 lines ... 
I0622 05:54:20.817] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:54:20.817] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 05:54:20.817] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:54:20.818] (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents I0622 05:54:20.818] [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m I0622 05:54:20.818] [90m------------------------------[0m I0622 05:54:20.819] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":6,"skipped":878,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 05:54:20.819] I0622 05:54:20.819] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:54:20.819] [90m------------------------------[0m I0622 05:54:20.819] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] I0622 05:54:20.820] test/e2e/storage/framework/testsuite.go:51 I0622 05:54:20.820] Jun 22 05:54:20.786: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support Block -- skipping ... skipping 192 lines ... I0622 05:54:21.136] I0622 05:54:21.136] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 05:54:21.136] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 05:54:21.136] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:54:21.136] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode I0622 05:54:21.137] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:54:21.137] [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m I0622 05:54:21.137] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m I0622 05:54:21.137] I0622 05:54:21.137] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 05:54:21.137] I0622 05:54:21.137] test/e2e/storage/external/external.go:269 I0622 05:54:21.137] [90m------------------------------[0m ... skipping 383 lines ... I0622 05:57:22.055] Jun 22 05:48:38.445: INFO: VolumeSnapshot snapshot-d7h8z found but is not ready. I0622 05:57:22.055] Jun 22 05:48:40.450: INFO: VolumeSnapshot snapshot-d7h8z found but is not ready. I0622 05:57:22.055] Jun 22 05:48:42.455: INFO: VolumeSnapshot snapshot-d7h8z found but is not ready. I0622 05:57:22.055] Jun 22 05:48:44.460: INFO: VolumeSnapshot snapshot-d7h8z found but is not ready. I0622 05:57:22.055] Jun 22 05:48:46.467: INFO: VolumeSnapshot snapshot-d7h8z found but is not ready. I0622 05:57:22.055] Jun 22 05:48:48.471: INFO: VolumeSnapshot snapshot-d7h8z found but is not ready. 
I0622 05:57:22.055] Jun 22 05:48:50.471: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 05:57:22.055] Jun 22 05:48:50.471: INFO: Unexpected error: I0622 05:57:22.056] <*errors.errorString | 0xc000dbae90>: { I0622 05:57:22.056] s: "VolumeSnapshot snapshot-d7h8z is not ready within 5m0s", I0622 05:57:22.056] } I0622 05:57:22.056] Jun 22 05:48:50.472: FAIL: VolumeSnapshot snapshot-d7h8z is not ready within 5m0s I0622 05:57:22.056] I0622 05:57:22.056] Full Stack Trace I0622 05:57:22.056] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc0038241b8?}, 0xc003704320) I0622 05:57:22.056] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 05:57:22.056] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7f2cf4422f88, 0xc000ab6580}, 0xc002480cc0, {{0x7182fbb, 0x17}, {0x0, 0x0}, {0x713ca0a, 0x9}, {0x0, ...}, ...}, ...) I0622 05:57:22.057] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 161 lines ... I0622 05:57:22.086] Jun 22 05:53:42.513: INFO: Pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008187509s I0622 05:57:22.087] Jun 22 05:53:44.514: INFO: Pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.009548142s I0622 05:57:22.087] Jun 22 05:53:46.516: INFO: Pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.011345283s I0622 05:57:22.087] Jun 22 05:53:48.514: INFO: Pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.008901449s I0622 05:57:22.087] Jun 22 05:53:50.515: INFO: Pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.010316698s I0622 05:57:22.088] Jun 22 05:53:50.519: INFO: Pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.013882263s I0622 05:57:22.088] Jun 22 05:53:50.519: INFO: Unexpected error: I0622 05:57:22.088] <*errors.errorString | 0xc00180adb0>: { I0622 05:57:22.088] s: "pod \"pod-c84c1a95-1462-4a7e-b151-3489df85fce0\" is not Running: timed out while waiting for pod provisioning-7574/pod-c84c1a95-1462-4a7e-b151-3489df85fce0 to be running", I0622 05:57:22.088] } I0622 05:57:22.088] Jun 22 05:53:50.519: FAIL: pod "pod-c84c1a95-1462-4a7e-b151-3489df85fce0" is not Running: timed out while waiting for pod provisioning-7574/pod-c84c1a95-1462-4a7e-b151-3489df85fce0 to be running I0622 05:57:22.089] I0622 05:57:22.089] Full Stack Trace I0622 05:57:22.089] k8s.io/kubernetes/test/e2e/storage/testsuites.StorageClassTest.TestDynamicProvisioning({{0x7a4f8e8, 0xc00186ca80}, 0xc002c91400, 0xc000c07a40, 0xc000c07c00, 0xc001733080, {0x0, 0x0}, {0x0, 0x0, ...}, ...}) I0622 05:57:22.089] test/e2e/storage/testsuites/provisioning.go:631 +0x828 I0622 05:57:22.089] k8s.io/kubernetes/test/e2e/storage/testsuites.(*provisioningTestSuite).DefineTests.func4() I0622 05:57:22.090] test/e2e/storage/testsuites/provisioning.go:243 +0x618 ... skipping 134 lines ... 
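[editor sketch] The "snapshot data source" case failing above creates a claim whose dataSource points at the VolumeSnapshot; because snapshot-d7h8z never becomes ready, provisioning of that claim cannot proceed. The API shape it exercises is roughly the following (the claim name is a placeholder; the StorageClass name is the generated one reported in the events further down):

kubectl apply -n provisioning-7574 -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc-example          # placeholder name
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: provisioning-7574-e2e-scg69dm
  resources:
    requests:
      storage: 1Ti
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: snapshot-d7h8z
EOF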
I0622 05:57:22.118] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:40 +0000 UTC - event for pvc-nrxdg: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningSucceeded: Successfully provisioned volume pvc-66dc7fa9-ba84-4800-a3f6-8692258713d7 I0622 05:57:22.118] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:44 +0000 UTC - event for external-injector: {kubelet e2e-test-prow-minion-group-gcmd} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine I0622 05:57:22.118] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:44 +0000 UTC - event for external-injector: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container external-injector I0622 05:57:22.118] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:44 +0000 UTC - event for external-injector: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container external-injector I0622 05:57:22.119] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:47 +0000 UTC - event for external-injector: {kubelet e2e-test-prow-minion-group-gcmd} Killing: Stopping container external-injector I0622 05:57:22.119] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:49 +0000 UTC - event for snapshot-d7h8z: {snapshot-controller } CreatingSnapshot: Waiting for a snapshot provisioning-7574/snapshot-d7h8z to be created by the CSI driver. I0622 05:57:22.119] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:43:49 +0000 UTC - event for snapshot-d7h8z: {snapshot-controller } SnapshotFinalizerError: Failed to check and update snapshot: snapshot controller failed to update provisioning-7574/snapshot-d7h8z on API server: volumesnapshots.snapshot.storage.k8s.io "snapshot-d7h8z" is forbidden: User "system:serviceaccount:kube-system:volume-snapshot-controller" cannot patch resource "volumesnapshots" in API group "snapshot.storage.k8s.io" in the namespace "provisioning-7574" I0622 05:57:22.120] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:48:50 +0000 UTC - event for pvc-mqw96: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-7574-e2e-scg69dm": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-d7h8z: snapshot snapshot-d7h8z is not Ready I0622 05:57:22.120] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:48:50 +0000 UTC - event for pvc-mqw96: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "filestore.csi.storage.gke.io" or manually created by system administrator I0622 05:57:22.120] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:48:50 +0000 UTC - event for pvc-mqw96: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding I0622 05:57:22.120] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:48:50 +0000 UTC - event for pvc-mqw96: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } Provisioning: External provisioner is provisioning volume for claim "provisioning-7574/pvc-mqw96" I0622 05:57:22.121] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:53:51 +0000 UTC - event for pod-c84c1a95-1462-4a7e-b151-3489df85fce0: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: failed to check provisioning pvc: could not find v1.PersistentVolumeClaim "provisioning-7574/pvc-mqw96" I0622 05:57:22.121] Jun 22 05:57:21.352: INFO: At 2022-06-22 05:53:53 
+0000 UTC - event for pod-c84c1a95-1462-4a7e-b151-3489df85fce0: {default-scheduler } FailedScheduling: 0/4 nodes are available: 4 persistentvolumeclaim "pvc-mqw96" not found. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling. I0622 05:57:22.121] Jun 22 05:57:21.357: INFO: POD NODE PHASE GRACE CONDITIONS I0622 05:57:22.121] Jun 22 05:57:21.357: INFO: pod-c84c1a95-1462-4a7e-b151-3489df85fce0 Pending [{PodScheduled False 0001-01-01 00:00:00 +0000 UTC 2022-06-22 05:53:51 +0000 UTC Unschedulable 0/4 nodes are available: 4 persistentvolumeclaim "pvc-mqw96" not found. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.}] I0622 05:57:22.121] Jun 22 05:57:21.357: INFO: I0622 05:57:22.122] Jun 22 05:57:21.365: INFO: I0622 05:57:22.122] Logging node info for node e2e-test-prow-master ... skipping 131 lines ... I0622 05:57:22.174] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m I0622 05:57:22.174] I0622 05:57:22.174] [91mJun 22 05:48:50.472: VolumeSnapshot snapshot-d7h8z is not ready within 5m0s[0m I0622 05:57:22.174] I0622 05:57:22.174] test/e2e/storage/utils/snapshot.go:86 I0622 05:57:22.175] [90m------------------------------[0m I0622 05:57:22.175] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":7,"skipped":1266,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 05:57:22.175] I0622 05:59:21.182] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:59:21.182] [90m------------------------------[0m I0622 05:59:21.182] [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 05:59:21.182] test/e2e/storage/framework/testsuite.go:51 I0622 05:59:21.182] [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy ... skipping 208 lines ... 
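[editor sketch] Both snapshot failures in this run are preceded by the same event: the snapshot-controller service account is forbidden from patching VolumeSnapshots. That can be confirmed directly against the API server; the account, resource, and namespace below are taken verbatim from the events above.

kubectl auth can-i patch volumesnapshots.snapshot.storage.k8s.io \
  --as=system:serviceaccount:kube-system:volume-snapshot-controller \
  -n provisioning-7574
# Per the forbidden events, this is expected to print "no"; the fix is an RBAC rule
# granting patch on volumesnapshots to that service account.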
I0622 05:59:21.227] [90mtest/e2e/storage/external/external.go:174[0m I0622 05:59:21.227] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 05:59:21.227] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 05:59:21.227] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents I0622 05:59:21.227] [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m I0622 05:59:21.227] [90m------------------------------[0m I0622 05:59:21.228] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":7,"skipped":1419,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 05:59:21.228] I0622 05:59:21.228] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 05:59:21.228] [90m------------------------------[0m I0622 05:59:21.228] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath I0622 05:59:21.228] test/e2e/storage/framework/testsuite.go:51 I0622 05:59:21.229] Jun 22 05:59:21.196: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping ... skipping 180 lines ... I0622 06:02:48.720] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:02:48.721] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand I0622 06:02:48.721] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:02:48.721] Verify if offline PVC expansion works I0622 06:02:48.721] [90mtest/e2e/storage/testsuites/volume_expand.go:176[0m I0622 06:02:48.721] [90m------------------------------[0m I0622 06:02:48.721] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":8,"skipped":1345,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:02:48.721] I0622 06:02:48.808] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:02:48.808] [90m------------------------------[0m I0622 06:02:48.809] [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 06:02:48.809] test/e2e/storage/framework/testsuite.go:51 I0622 06:02:48.809] [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 172 lines ... 
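[editor sketch] The passing fsgroupchangepolicy cases in this run toggle the pod-level fsGroupChangePolicy field between Always and OnRootMismatch. An illustrative pod shape, with placeholder pod and PVC names, is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example               # placeholder name
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: OnRootMismatch   # the other tested value is "Always"
  containers:
  - name: app
    image: registry.k8s.io/e2e-test-images/busybox:1.29-2
    command: ["sh", "-c", "ls -ln /mnt/test && sleep 60"]
    volumeMounts:
    - name: data
      mountPath: /mnt/test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: filestore-pvc-example   # placeholder PVC
EOF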
I0622 06:04:42.787] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:04:42.788] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand I0622 06:04:42.788] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:04:42.788] should resize volume when PVC is edited while pod is using it I0622 06:04:42.788] [90mtest/e2e/storage/testsuites/volume_expand.go:252[0m I0622 06:04:42.788] [90m------------------------------[0m I0622 06:04:42.789] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":8,"skipped":1472,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:04:42.789] I0622 06:04:42.796] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:04:42.797] [90m------------------------------[0m I0622 06:04:42.797] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] I0622 06:04:42.797] test/e2e/storage/framework/testsuite.go:51 I0622 06:04:42.797] Jun 22 06:04:42.795: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 56 lines ... I0622 06:04:42.962] I0622 06:04:42.962] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:04:42.963] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:04:42.963] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:04:42.963] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 06:04:42.963] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:04:42.963] [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:04:42.963] [90mtest/e2e/storage/testsuites/subpath.go:242[0m I0622 06:04:42.963] I0622 06:04:42.963] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 06:04:42.963] I0622 06:04:42.963] test/e2e/storage/external/external.go:269 I0622 06:04:42.963] [90m------------------------------[0m ... skipping 8 lines ... I0622 06:04:42.997] I0622 06:04:42.997] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:04:42.997] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:04:42.997] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:04:42.998] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath I0622 06:04:42.998] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:04:42.998] [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:04:42.998] [90mtest/e2e/storage/testsuites/subpath.go:242[0m I0622 06:04:42.998] I0622 06:04:42.998] [36mDriver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping[0m I0622 06:04:42.998] I0622 06:04:42.998] test/e2e/storage/framework/testsuite.go:121 I0622 06:04:42.998] [90m------------------------------[0m ... skipping 77 lines ... 
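[editor sketch] The volume-expand cases passing above edit spec.resources.requests.storage on a bound claim (offline, and again while a pod is using it); the StorageClass must set allowVolumeExpansion: true for this to be accepted. A sketch with placeholder claim and namespace names:

kubectl patch pvc filestore-pvc-example -n default --type=merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'
# Watch capacity and resize conditions update on the claim.
kubectl get pvc filestore-pvc-example -n default -w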
I0622 06:07:28.298] Jun 22 06:02:48.848: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:07:28.298] [1mSTEP[0m: creating a StorageClass provisioning-6287-e2e-scnptqb I0622 06:07:28.298] [1mSTEP[0m: creating a claim I0622 06:07:28.299] Jun 22 06:02:48.853: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:07:28.299] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-66j7 I0622 06:07:28.299] [1mSTEP[0m: Creating a pod to test subpath I0622 06:07:28.299] Jun 22 06:02:48.882: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-66j7" in namespace "provisioning-6287" to be "Succeeded or Failed" I0622 06:07:28.299] Jun 22 06:02:48.890: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.87993ms I0622 06:07:28.299] Jun 22 06:02:50.895: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013104449s I0622 06:07:28.299] Jun 22 06:02:52.899: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01648313s I0622 06:07:28.300] Jun 22 06:02:54.897: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014871834s I0622 06:07:28.300] Jun 22 06:02:56.897: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01433334s I0622 06:07:28.300] Jun 22 06:02:58.895: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.012665101s ... skipping 62 lines ... I0622 06:07:28.310] Jun 22 06:05:04.895: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.012405097s I0622 06:07:28.311] Jun 22 06:05:06.896: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.013553587s I0622 06:07:28.311] Jun 22 06:05:08.895: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.012454545s I0622 06:07:28.311] Jun 22 06:05:10.894: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.011933846s I0622 06:07:28.311] Jun 22 06:05:12.895: INFO: Pod "pod-subpath-test-dynamicpv-66j7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m24.012854816s I0622 06:07:28.311] [1mSTEP[0m: Saw pod success I0622 06:07:28.311] Jun 22 06:05:12.895: INFO: Pod "pod-subpath-test-dynamicpv-66j7" satisfied condition "Succeeded or Failed" I0622 06:07:28.312] Jun 22 06:05:12.900: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-66j7 container test-container-volume-dynamicpv-66j7: <nil> I0622 06:07:28.312] [1mSTEP[0m: delete the pod I0622 06:07:28.312] Jun 22 06:05:12.966: INFO: Waiting for pod pod-subpath-test-dynamicpv-66j7 to disappear I0622 06:07:28.312] Jun 22 06:05:12.971: INFO: Pod pod-subpath-test-dynamicpv-66j7 no longer exists I0622 06:07:28.312] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-66j7 I0622 06:07:28.312] Jun 22 06:05:12.971: INFO: Deleting pod "pod-subpath-test-dynamicpv-66j7" in namespace "provisioning-6287" ... skipping 42 lines ... 
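The "creating a StorageClass … creating a claim" steps above amount to a dynamically provisioned claim at the 1Ti size the suite logs. Roughly — the claim name is a placeholder, the class name is the generated one from this spec, and ReadWriteMany is an assumption about what an NFS-backed Filestore share would normally expose rather than something read out of this run:

```yaml
# Sketch only: the dynamically provisioned claim the subPath spec above builds
# its test pod on top of.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: provisioning-demo-pvc                       # hypothetical name
spec:
  accessModes: ["ReadWriteMany"]                    # assumed; not shown in the log
  storageClassName: provisioning-6287-e2e-scnptqb   # generated class named in the log above
  resources:
    requests:
      storage: 1Ti
```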
I0622 06:07:28.318] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:07:28.318] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:07:28.318] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:07:28.318] should support non-existent path I0622 06:07:28.318] [90mtest/e2e/storage/testsuites/subpath.go:196[0m I0622 06:07:28.319] [90m------------------------------[0m I0622 06:07:28.319] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":1391,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:07:28.319] I0622 06:07:28.319] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:07:28.319] [90m------------------------------[0m I0622 06:07:28.319] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 06:07:28.319] test/e2e/storage/framework/testsuite.go:51 I0622 06:07:28.320] Jun 22 06:07:28.307: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 230 lines ... I0622 06:09:48.713] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:09:48.713] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy I0622 06:09:48.713] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:09:48.713] (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents I0622 06:09:48.713] [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m I0622 06:09:48.714] [90m------------------------------[0m I0622 06:09:48.714] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":9,"skipped":1734,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:09:48.714] I0622 06:09:48.734] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:09:48.735] [90m------------------------------[0m I0622 06:09:48.735] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath I0622 06:09:48.735] test/e2e/storage/framework/testsuite.go:51 I0622 06:09:48.736] Jun 22 06:09:48.733: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 3 lines ... 
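The subPath cases that keep passing here ("non-existent path", "existing directory", and the [Slow] restart cases later on) all revolve around a single volumeMount field. A sketch of the mount shape, with placeholder names:

```yaml
# Sketch only: only the named subdirectory of the claim is visible to the
# container; the suite's negative cases check that paths escaping the volume
# via subPath are rejected.
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: registry.k8s.io/e2e-test-images/busybox:1.29-2
    command: ["sh", "-c", "echo ok > /mnt/test/out.txt"]
    volumeMounts:
    - name: data
      mountPath: /mnt/test
      subPath: provisioning           # hypothetical subdirectory name
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: filestore-pvc        # hypothetical claim name
```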
I0622 06:09:48.736] I0622 06:09:48.737] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:09:48.737] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:09:48.737] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:09:48.737] [Testpattern: Inline-volume (default fs)] subPath I0622 06:09:48.738] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:09:48.738] [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:09:48.738] [90mtest/e2e/storage/testsuites/subpath.go:280[0m I0622 06:09:48.738] I0622 06:09:48.739] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping[0m I0622 06:09:48.739] I0622 06:09:48.739] test/e2e/storage/external/external.go:269 I0622 06:09:48.739] [90m------------------------------[0m ... skipping 154 lines ... I0622 06:13:41.120] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:13:41.121] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:13:41.121] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:13:41.121] should support restarting containers using directory as subpath [Slow] I0622 06:13:41.121] [90mtest/e2e/storage/testsuites/subpath.go:322[0m I0622 06:13:41.121] [90m------------------------------[0m I0622 06:13:41.122] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":-1,"completed":10,"skipped":1400,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:13:41.122] I0622 06:13:41.123] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:13:41.123] [90m------------------------------[0m I0622 06:13:41.123] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 06:13:41.123] test/e2e/storage/framework/testsuite.go:51 I0622 06:13:41.123] Jun 22 06:13:41.100: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 306 lines ... 
I0622 06:14:50.013] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:14:50.013] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 06:14:50.013] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:14:50.014] should concurrently access the single volume from pods on different node I0622 06:14:50.014] [90mtest/e2e/storage/testsuites/multivolume.go:451[0m I0622 06:14:50.014] [90m------------------------------[0m I0622 06:14:50.015] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node","total":-1,"completed":10,"skipped":1894,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:14:50.015] I0622 06:14:50.015] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:14:50.015] [90m------------------------------[0m I0622 06:14:50.015] [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes I0622 06:14:50.015] test/e2e/storage/framework/testsuite.go:51 I0622 06:14:50.016] Jun 22 06:14:49.960: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 51 lines ... I0622 06:18:44.341] Jun 22 05:49:53.999: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:18:44.341] [1mSTEP[0m: creating a StorageClass snapshotting-7502-e2e-sc6kr4v I0622 06:18:44.341] [1mSTEP[0m: creating a claim I0622 06:18:44.342] Jun 22 05:49:54.004: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:18:44.342] [1mSTEP[0m: [init] starting a pod to use the claim I0622 06:18:44.342] [1mSTEP[0m: [init] check pod success I0622 06:18:44.342] Jun 22 05:49:54.037: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-vsm8w" in namespace "snapshotting-7502" to be "Succeeded or Failed" I0622 06:18:44.343] Jun 22 05:49:54.042: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039886ms I0622 06:18:44.343] Jun 22 05:49:56.046: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008785811s I0622 06:18:44.343] Jun 22 05:49:58.047: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009744119s I0622 06:18:44.343] Jun 22 05:50:00.046: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008943629s I0622 06:18:44.343] Jun 22 05:50:02.047: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009478684s I0622 06:18:44.344] Jun 22 05:50:04.046: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008427673s ... skipping 77 lines ... I0622 06:18:44.368] Jun 22 05:52:40.046: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.008588479s I0622 06:18:44.368] Jun 22 05:52:42.051: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m48.013288224s I0622 06:18:44.368] Jun 22 05:52:44.046: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008458637s I0622 06:18:44.369] Jun 22 05:52:46.046: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.008666172s I0622 06:18:44.369] Jun 22 05:52:48.067: INFO: Pod "pvc-snapshottable-tester-vsm8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m54.029642146s I0622 06:18:44.369] [1mSTEP[0m: Saw pod success I0622 06:18:44.369] Jun 22 05:52:48.067: INFO: Pod "pvc-snapshottable-tester-vsm8w" satisfied condition "Succeeded or Failed" I0622 06:18:44.370] [1mSTEP[0m: [init] checking the claim I0622 06:18:44.370] Jun 22 05:52:48.082: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-gcpfs-fs-sc-basic-hdd75f68] to have phase Bound I0622 06:18:44.370] Jun 22 05:52:48.130: INFO: PersistentVolumeClaim csi-gcpfs-fs-sc-basic-hdd75f68 found and phase=Bound (47.952983ms) I0622 06:18:44.370] [1mSTEP[0m: [init] checking the PV I0622 06:18:44.370] [1mSTEP[0m: [init] deleting the pod I0622 06:18:44.370] Jun 22 05:52:48.278: INFO: Pod pvc-snapshottable-tester-vsm8w has the following logs: ... skipping 152 lines ... I0622 06:18:44.398] Jun 22 05:57:37.306: INFO: VolumeSnapshot snapshot-bxs8b found but is not ready. I0622 06:18:44.398] Jun 22 05:57:39.323: INFO: VolumeSnapshot snapshot-bxs8b found but is not ready. I0622 06:18:44.399] Jun 22 05:57:41.328: INFO: VolumeSnapshot snapshot-bxs8b found but is not ready. I0622 06:18:44.399] Jun 22 05:57:43.333: INFO: VolumeSnapshot snapshot-bxs8b found but is not ready. I0622 06:18:44.399] Jun 22 05:57:45.339: INFO: VolumeSnapshot snapshot-bxs8b found but is not ready. I0622 06:18:44.399] Jun 22 05:57:47.344: INFO: VolumeSnapshot snapshot-bxs8b found but is not ready. I0622 06:18:44.399] Jun 22 05:57:49.345: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 06:18:44.399] Jun 22 05:57:49.346: INFO: Unexpected error: I0622 06:18:44.399] <*errors.errorString | 0xc003541f10>: { I0622 06:18:44.400] s: "VolumeSnapshot snapshot-bxs8b is not ready within 5m0s", I0622 06:18:44.400] } I0622 06:18:44.400] Jun 22 05:57:49.346: FAIL: VolumeSnapshot snapshot-bxs8b is not ready within 5m0s I0622 06:18:44.400] I0622 06:18:44.400] Full Stack Trace I0622 06:18:44.400] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc0007648c0?}, 0xc002d26580) I0622 06:18:44.400] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 06:18:44.401] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7fd2a034bd60, 0xc000ba5b80}, 0xc001e2cf00, {{0x71c3d50, 0x20}, {0x0, 0x0}, {0x713ca0a, 0x9}, {0x0, ...}, ...}, ...) I0622 06:18:44.401] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 8 lines ... I0622 06:18:44.402] created by testing.(*T).Run I0622 06:18:44.402] /usr/local/go/src/testing/testing.go:1486 +0x35f I0622 06:18:44.402] [1mSTEP[0m: checking the snapshot I0622 06:18:44.403] [1mSTEP[0m: checking the SnapshotContent I0622 06:18:44.403] [1mSTEP[0m: Modifying source data test I0622 06:18:44.403] [1mSTEP[0m: modifying the data in the source PVC I0622 06:18:44.403] Jun 22 05:57:49.381: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-vp87q" in namespace "snapshotting-7502" to be "Succeeded or Failed" I0622 06:18:44.403] Jun 22 05:57:49.392: INFO: Pod "pvc-snapshottable-data-tester-vp87q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.852305ms I0622 06:18:44.404] Jun 22 05:57:51.398: INFO: Pod "pvc-snapshottable-data-tester-vp87q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016194362s I0622 06:18:44.404] Jun 22 05:57:53.397: INFO: Pod "pvc-snapshottable-data-tester-vp87q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015323203s I0622 06:18:44.404] Jun 22 05:57:55.396: INFO: Pod "pvc-snapshottable-data-tester-vp87q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014806258s I0622 06:18:44.404] Jun 22 05:57:57.398: INFO: Pod "pvc-snapshottable-data-tester-vp87q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01708884s I0622 06:18:44.404] [1mSTEP[0m: Saw pod success I0622 06:18:44.405] Jun 22 05:57:57.399: INFO: Pod "pvc-snapshottable-data-tester-vp87q" satisfied condition "Succeeded or Failed" I0622 06:18:44.405] Jun 22 05:57:57.490: INFO: Pod pvc-snapshottable-data-tester-vp87q has the following logs: I0622 06:18:44.405] Jun 22 05:57:57.490: INFO: Deleting pod "pvc-snapshottable-data-tester-vp87q" in namespace "snapshotting-7502" I0622 06:18:44.405] Jun 22 05:57:57.519: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-vp87q" to be fully deleted I0622 06:18:44.405] [1mSTEP[0m: creating a pvc from the snapshot I0622 06:18:44.405] [1mSTEP[0m: starting a pod to use the snapshot I0622 06:18:44.406] Jun 22 05:57:57.553: INFO: Waiting up to 15m0s for pod "restored-pvc-tester-jq5dz" in namespace "snapshotting-7502" to be "running" ... skipping 446 lines ... I0622 06:18:44.487] Jun 22 06:12:49.594: INFO: Pod "restored-pvc-tester-jq5dz": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.041258558s I0622 06:18:44.487] Jun 22 06:12:51.586: INFO: Pod "restored-pvc-tester-jq5dz": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.032442806s I0622 06:18:44.487] Jun 22 06:12:53.583: INFO: Pod "restored-pvc-tester-jq5dz": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.029998135s I0622 06:18:44.487] Jun 22 06:12:55.584: INFO: Pod "restored-pvc-tester-jq5dz": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.031215077s I0622 06:18:44.488] Jun 22 06:12:57.584: INFO: Pod "restored-pvc-tester-jq5dz": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.031219395s I0622 06:18:44.488] Jun 22 06:12:57.590: INFO: Pod "restored-pvc-tester-jq5dz": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.037204016s I0622 06:18:44.488] Jun 22 06:12:57.591: INFO: Unexpected error: I0622 06:18:44.488] <*pod.timeoutError | 0xc002f116e0>: { I0622 06:18:44.488] msg: "timed out while waiting for pod snapshotting-7502/restored-pvc-tester-jq5dz to be running", I0622 06:18:44.488] observedObjects: [ I0622 06:18:44.488] { I0622 06:18:44.488] TypeMeta: {Kind: "", APIVersion: ""}, I0622 06:18:44.489] ObjectMeta: { ... skipping 329 lines ... I0622 06:18:44.524] QOSClass: "BestEffort", I0622 06:18:44.524] EphemeralContainerStatuses: nil, I0622 06:18:44.525] }, I0622 06:18:44.525] }, I0622 06:18:44.525] ], I0622 06:18:44.525] } I0622 06:18:44.525] Jun 22 06:12:57.592: FAIL: timed out while waiting for pod snapshotting-7502/restored-pvc-tester-jq5dz to be running I0622 06:18:44.525] I0622 06:18:44.525] Full Stack Trace I0622 06:18:44.525] k8s.io/kubernetes/test/e2e/storage/testsuites.(*snapshottableTestSuite).DefineTests.func1.4.2() I0622 06:18:44.525] test/e2e/storage/testsuites/snapshottable.go:410 +0x16c5 I0622 06:18:44.525] k8s.io/kubernetes/test/e2e.RunE2ETests(0x2565617?) I0622 06:18:44.525] test/e2e/e2e.go:130 +0x686 ... 
skipping 163 lines ... I0622 06:18:44.554] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:39 +0000 UTC - event for csi-gcpfs-fs-sc-basic-hdd75f68: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningSucceeded: Successfully provisioned volume pvc-4f18c6bf-c5a4-4a05-818f-8cb140207644 I0622 06:18:44.554] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:40 +0000 UTC - event for pvc-snapshottable-tester-vsm8w: {default-scheduler } Scheduled: Successfully assigned snapshotting-7502/pvc-snapshottable-tester-vsm8w to e2e-test-prow-minion-group-gcmd I0622 06:18:44.555] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:43 +0000 UTC - event for pvc-snapshottable-tester-vsm8w: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester I0622 06:18:44.555] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:43 +0000 UTC - event for pvc-snapshottable-tester-vsm8w: {kubelet e2e-test-prow-minion-group-gcmd} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine I0622 06:18:44.555] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:44 +0000 UTC - event for pvc-snapshottable-tester-vsm8w: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 06:18:44.555] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:48 +0000 UTC - event for snapshot-bxs8b: {snapshot-controller } CreatingSnapshot: Waiting for a snapshot snapshotting-7502/snapshot-bxs8b to be created by the CSI driver. I0622 06:18:44.556] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:52:48 +0000 UTC - event for snapshot-bxs8b: {snapshot-controller } SnapshotFinalizerError: Failed to check and update snapshot: snapshot controller failed to update snapshotting-7502/snapshot-bxs8b on API server: volumesnapshots.snapshot.storage.k8s.io "snapshot-bxs8b" is forbidden: User "system:serviceaccount:kube-system:volume-snapshot-controller" cannot patch resource "volumesnapshots" in API group "snapshot.storage.k8s.io" in the namespace "snapshotting-7502" I0622 06:18:44.556] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:49 +0000 UTC - event for pvc-snapshottable-data-tester-vp87q: {default-scheduler } Scheduled: Successfully assigned snapshotting-7502/pvc-snapshottable-data-tester-vp87q to e2e-test-prow-minion-group-gcmd I0622 06:18:44.557] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:53 +0000 UTC - event for pvc-snapshottable-data-tester-vp87q: {kubelet e2e-test-prow-minion-group-gcmd} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine I0622 06:18:44.557] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:53 +0000 UTC - event for pvc-snapshottable-data-tester-vp87q: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 06:18:44.557] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:53 +0000 UTC - event for pvc-snapshottable-data-tester-vp87q: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester I0622 06:18:44.557] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:57 +0000 UTC - event for pvc-r9frw: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "filestore.csi.storage.gke.io" or manually created by system administrator I0622 06:18:44.558] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:57 +0000 UTC - event for pvc-r9frw: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } Provisioning: External 
provisioner is provisioning volume for claim "snapshotting-7502/pvc-r9frw" I0622 06:18:44.558] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:57 +0000 UTC - event for pvc-r9frw: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding I0622 06:18:44.559] Jun 22 06:18:43.589: INFO: At 2022-06-22 05:57:57 +0000 UTC - event for pvc-r9frw: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningFailed: failed to provision volume with StorageClass "snapshotting-7502-e2e-sc6kr4v": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-bxs8b: snapshot snapshot-bxs8b is not Ready I0622 06:18:44.559] Jun 22 06:18:43.589: INFO: At 2022-06-22 06:07:57 +0000 UTC - event for restored-pvc-tester-jq5dz: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition I0622 06:18:44.559] Jun 22 06:18:43.589: INFO: At 2022-06-22 06:12:57 +0000 UTC - event for restored-pvc-tester-jq5dz: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "restored-pvc-tester-jq5dz" not found I0622 06:18:44.560] Jun 22 06:18:43.592: INFO: POD NODE PHASE GRACE CONDITIONS I0622 06:18:44.560] Jun 22 06:18:43.592: INFO: I0622 06:18:44.560] Jun 22 06:18:43.597: INFO: I0622 06:18:44.560] Logging node info for node e2e-test-prow-master ... skipping 137 lines ... I0622 06:18:44.629] [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m I0622 06:18:44.629] I0622 06:18:44.629] [91mJun 22 05:57:49.346: VolumeSnapshot snapshot-bxs8b is not ready within 5m0s[0m I0622 06:18:44.629] I0622 06:18:44.629] test/e2e/storage/utils/snapshot.go:86 I0622 06:18:44.630] [90m------------------------------[0m I0622 06:18:44.630] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":-1,"completed":10,"skipped":1110,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)"]} I0622 06:18:44.631] I0622 06:19:03.171] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:19:03.171] [90m------------------------------[0m I0622 06:19:03.172] [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral I0622 06:19:03.172] test/e2e/storage/framework/testsuite.go:51 I0622 06:19:03.172] [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral ... skipping 140 lines ... 
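One detail worth pulling out of the event dump above is the SnapshotFinalizerError: the kube-system volume-snapshot-controller service account is denied "patch" on volumesnapshots, which points at an RBAC gap in the cluster addon rather than at the driver. The grant that has to exist looks roughly like the following — names are placeholders, and the authoritative rules ship with the external-snapshotter / snapshot-controller manifests, so this is only the shape of the missing permission:

```yaml
# Sketch only: the denied request above was verb "patch" on volumesnapshots in
# group snapshot.storage.k8s.io, made by
# system:serviceaccount:kube-system:volume-snapshot-controller.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: volume-snapshot-controller-role       # hypothetical name
rules:
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots/status"]
  verbs: ["update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: volume-snapshot-controller-binding    # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: volume-snapshot-controller-role
subjects:
- kind: ServiceAccount
  name: volume-snapshot-controller            # the denied user in the event above
  namespace: kube-system
```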
I0622 06:19:03.210] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:19:03.211] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral I0622 06:19:03.211] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:19:03.211] should create read/write inline ephemeral volume I0622 06:19:03.211] [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m I0622 06:19:03.211] [90m------------------------------[0m I0622 06:19:03.212] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":11,"skipped":1464,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:19:03.212] I0622 06:19:03.274] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:19:03.275] [90m------------------------------[0m I0622 06:19:03.275] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 06:19:03.276] test/e2e/storage/framework/testsuite.go:51 I0622 06:19:03.276] Jun 22 06:19:03.272: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 93 lines ... I0622 06:23:48.680] Jun 22 06:19:03.378: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:23:48.680] [1mSTEP[0m: creating a StorageClass provisioning-9011-e2e-sc6xnd6 I0622 06:23:48.680] [1mSTEP[0m: creating a claim I0622 06:23:48.681] Jun 22 06:19:03.383: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:23:48.682] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-q5zs I0622 06:23:48.682] [1mSTEP[0m: Creating a pod to test subpath I0622 06:23:48.682] Jun 22 06:19:03.410: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-q5zs" in namespace "provisioning-9011" to be "Succeeded or Failed" I0622 06:23:48.683] Jun 22 06:19:03.419: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.905419ms I0622 06:23:48.683] Jun 22 06:19:05.425: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014424638s I0622 06:23:48.683] Jun 22 06:19:07.424: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013935677s I0622 06:23:48.684] Jun 22 06:19:09.424: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01400916s I0622 06:23:48.684] Jun 22 06:19:11.424: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013946592s I0622 06:23:48.684] Jun 22 06:19:13.423: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013027007s ... 
skipping 60 lines ... I0622 06:23:48.702] Jun 22 06:21:15.425: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.014252441s I0622 06:23:48.702] Jun 22 06:21:17.423: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.013047655s I0622 06:23:48.702] Jun 22 06:21:19.424: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.013398052s I0622 06:23:48.703] Jun 22 06:21:21.424: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.013747717s I0622 06:23:48.703] Jun 22 06:21:23.426: INFO: Pod "pod-subpath-test-dynamicpv-q5zs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m20.015806079s I0622 06:23:48.703] [1mSTEP[0m: Saw pod success I0622 06:23:48.703] Jun 22 06:21:23.426: INFO: Pod "pod-subpath-test-dynamicpv-q5zs" satisfied condition "Succeeded or Failed" I0622 06:23:48.704] Jun 22 06:21:23.433: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-q5zs container test-container-volume-dynamicpv-q5zs: <nil> I0622 06:23:48.704] [1mSTEP[0m: delete the pod I0622 06:23:48.704] Jun 22 06:21:23.472: INFO: Waiting for pod pod-subpath-test-dynamicpv-q5zs to disappear I0622 06:23:48.704] Jun 22 06:21:23.476: INFO: Pod pod-subpath-test-dynamicpv-q5zs no longer exists I0622 06:23:48.704] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-q5zs I0622 06:23:48.704] Jun 22 06:21:23.477: INFO: Deleting pod "pod-subpath-test-dynamicpv-q5zs" in namespace "provisioning-9011" ... skipping 44 lines ... I0622 06:23:48.714] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:23:48.714] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:23:48.714] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:23:48.715] should support existing directory I0622 06:23:48.715] [90mtest/e2e/storage/testsuites/subpath.go:207[0m I0622 06:23:48.715] [90m------------------------------[0m I0622 06:23:48.715] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":1574,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:23:48.715] I0622 06:31:46.455] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:31:46.455] [90m------------------------------[0m I0622 06:31:46.455] [BeforeEach] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] I0622 06:31:46.455] test/e2e/storage/framework/testsuite.go:51 I0622 06:31:46.455] [BeforeEach] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 6 lines ... 
I0622 06:31:46.456] [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) I0622 06:31:46.456] test/e2e/storage/testsuites/snapshottable.go:177 I0622 06:31:46.456] Jun 22 06:14:50.027: INFO: Creating resource for dynamic PV I0622 06:31:46.457] Jun 22 06:14:50.027: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:31:46.457] [1mSTEP[0m: creating a StorageClass snapshotting-4072-e2e-scct5vq I0622 06:31:46.457] [1mSTEP[0m: [init] starting a pod to use the claim I0622 06:31:46.457] Jun 22 06:14:50.044: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-7q47r" in namespace "snapshotting-4072" to be "Succeeded or Failed" I0622 06:31:46.457] Jun 22 06:14:50.068: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 23.908136ms I0622 06:31:46.457] Jun 22 06:14:52.073: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028957122s I0622 06:31:46.457] Jun 22 06:14:54.073: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029005702s I0622 06:31:46.458] Jun 22 06:14:56.074: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029829781s I0622 06:31:46.458] Jun 22 06:14:58.074: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029921402s I0622 06:31:46.458] Jun 22 06:15:00.074: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029852905s ... skipping 58 lines ... I0622 06:31:46.468] Jun 22 06:16:58.078: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.033590927s I0622 06:31:46.468] Jun 22 06:17:00.074: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.029166022s I0622 06:31:46.468] Jun 22 06:17:02.072: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.0278428s I0622 06:31:46.469] Jun 22 06:17:04.073: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.028811017s I0622 06:31:46.469] Jun 22 06:17:06.074: INFO: Pod "pvc-snapshottable-tester-7q47r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m16.029715807s I0622 06:31:46.469] [1mSTEP[0m: Saw pod success I0622 06:31:46.469] Jun 22 06:17:06.074: INFO: Pod "pvc-snapshottable-tester-7q47r" satisfied condition "Succeeded or Failed" I0622 06:31:46.469] [1mSTEP[0m: [init] checking the claim I0622 06:31:46.469] [1mSTEP[0m: creating a SnapshotClass I0622 06:31:46.469] [1mSTEP[0m: creating a dynamic VolumeSnapshot I0622 06:31:46.469] Jun 22 06:17:06.098: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-2fmvv to become ready I0622 06:31:46.470] Jun 22 06:17:06.114: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. I0622 06:31:46.470] Jun 22 06:17:08.119: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. ... skipping 142 lines ... I0622 06:31:46.489] Jun 22 06:21:55.066: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. I0622 06:31:46.489] Jun 22 06:21:57.088: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. I0622 06:31:46.489] Jun 22 06:21:59.093: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. 
I0622 06:31:46.489] Jun 22 06:22:01.099: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. I0622 06:31:46.490] Jun 22 06:22:03.104: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. I0622 06:31:46.490] Jun 22 06:22:05.113: INFO: VolumeSnapshot snapshot-2fmvv found but is not ready. I0622 06:31:46.490] Jun 22 06:22:07.113: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 06:31:46.490] Jun 22 06:22:07.113: INFO: Unexpected error: I0622 06:31:46.490] <*errors.errorString | 0xc003647570>: { I0622 06:31:46.490] s: "VolumeSnapshot snapshot-2fmvv is not ready within 5m0s", I0622 06:31:46.490] } I0622 06:31:46.490] Jun 22 06:22:07.113: FAIL: VolumeSnapshot snapshot-2fmvv is not ready within 5m0s I0622 06:31:46.491] I0622 06:31:46.491] Full Stack Trace I0622 06:31:46.491] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc003bc01e0?}, 0xc003bc01a0) I0622 06:31:46.491] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 06:31:46.491] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7f2e44473020, 0xc000b20160}, 0xc003c9e4e0, {{0x71d38ed, 0x22}, {0x0, 0x0}, {0x717cceb, 0x16}, {0x0, ...}, ...}, ...) I0622 06:31:46.491] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 309 lines ... I0622 06:31:46.544] Jun 22 06:31:07.392: INFO: volumesnapshotcontents snapcontent-85577b12-f52e-49f5-bfdb-785f5ee98e39 has been found and is not deleted I0622 06:31:46.544] Jun 22 06:31:08.397: INFO: volumesnapshotcontents snapcontent-85577b12-f52e-49f5-bfdb-785f5ee98e39 has been found and is not deleted I0622 06:31:46.544] Jun 22 06:31:09.402: INFO: volumesnapshotcontents snapcontent-85577b12-f52e-49f5-bfdb-785f5ee98e39 has been found and is not deleted I0622 06:31:46.544] Jun 22 06:31:10.407: INFO: volumesnapshotcontents snapcontent-85577b12-f52e-49f5-bfdb-785f5ee98e39 has been found and is not deleted I0622 06:31:46.544] Jun 22 06:31:11.411: INFO: volumesnapshotcontents snapcontent-85577b12-f52e-49f5-bfdb-785f5ee98e39 has been found and is not deleted I0622 06:31:46.545] Jun 22 06:31:12.420: INFO: volumesnapshotcontents snapcontent-85577b12-f52e-49f5-bfdb-785f5ee98e39 has been found and is not deleted I0622 06:31:46.545] Jun 22 06:31:13.421: INFO: WaitUntil failed after reaching the timeout 30s I0622 06:31:46.545] [AfterEach] volume snapshot controller I0622 06:31:46.545] test/e2e/storage/testsuites/snapshottable.go:172 I0622 06:31:46.545] Jun 22 06:31:13.443: INFO: Pod restored-pvc-tester-6z7ng has the following logs: I0622 06:31:46.545] Jun 22 06:31:13.443: INFO: Deleting pod "restored-pvc-tester-6z7ng" in namespace "snapshotting-4072" I0622 06:31:46.545] Jun 22 06:31:13.456: INFO: Wait up to 5m0s for pod "restored-pvc-tester-6z7ng" to be fully deleted I0622 06:31:46.545] Jun 22 06:31:45.471: INFO: deleting snapshot "snapshotting-4072"/"snapshot-2fmvv" ... skipping 18 lines ... 
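The "(retain policy)" in this spec's name refers to the VolumeSnapshotClass deletionPolicy, which decides whether the bound VolumeSnapshotContent (and the backend snapshot) survives deletion of the namespaced VolumeSnapshot — the behaviour the 30s poll on snapcontent-85577b12-… above is probing. A sketch with a placeholder class name:

```yaml
# Sketch only: with Retain, deleting the VolumeSnapshot leaves the
# VolumeSnapshotContent (and backend snapshot) in place; Delete removes both.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: filestore-retain-snapclass    # hypothetical name
driver: filestore.csi.storage.gke.io
deletionPolicy: Retain
```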
I0622 06:31:46.551] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:17:01 +0000 UTC - event for pvc-snapshottable-tester-7q47r: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester I0622 06:31:46.551] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:17:02 +0000 UTC - event for pvc-snapshottable-tester-7q47r: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 06:31:46.551] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:17:06 +0000 UTC - event for snapshot-2fmvv: {snapshot-controller } CreatingSnapshot: Waiting for a snapshot snapshotting-4072/snapshot-2fmvv to be created by the CSI driver. I0622 06:31:46.552] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:22:07 +0000 UTC - event for restored-pvc-tester-6z7ng: {default-scheduler } FailedScheduling: 0/4 nodes are available: 4 waiting for ephemeral volume controller to create the persistentvolumeclaim "restored-pvc-tester-6z7ng-my-volume". preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling. I0622 06:31:46.552] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:22:07 +0000 UTC - event for restored-pvc-tester-6z7ng-my-volume: {persistentvolume-controller } WaitForPodScheduled: waiting for pod restored-pvc-tester-6z7ng to be scheduled I0622 06:31:46.552] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:22:08 +0000 UTC - event for restored-pvc-tester-6z7ng-my-volume: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "filestore.csi.storage.gke.io" or manually created by system administrator I0622 06:31:46.553] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:22:08 +0000 UTC - event for restored-pvc-tester-6z7ng-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningFailed: failed to provision volume with StorageClass "snapshotting-4072-e2e-scct5vq": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-2fmvv: snapshot snapshot-2fmvv is not Ready I0622 06:31:46.554] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:22:08 +0000 UTC - event for restored-pvc-tester-6z7ng-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } Provisioning: External provisioner is provisioning volume for claim "snapshotting-4072/restored-pvc-tester-6z7ng-my-volume" I0622 06:31:46.554] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:27:25 +0000 UTC - event for snapshot-2fmvv: {snapshot-controller } SnapshotCreated: Snapshot snapshotting-4072/snapshot-2fmvv was successfully created by the CSI driver. I0622 06:31:46.554] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:27:25 +0000 UTC - event for snapshot-2fmvv: {snapshot-controller } SnapshotReady: Snapshot snapshotting-4072/snapshot-2fmvv is ready to use. 
I0622 06:31:46.555] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:30:37 +0000 UTC - event for restored-pvc-tester-6z7ng: {default-scheduler } Scheduled: Successfully assigned snapshotting-4072/restored-pvc-tester-6z7ng to e2e-test-prow-minion-group-gcmd I0622 06:31:46.555] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:30:37 +0000 UTC - event for restored-pvc-tester-6z7ng-my-volume: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningSucceeded: Successfully provisioned volume pvc-99ab5824-64f3-4271-a75d-dc94625653a6 I0622 06:31:46.555] Jun 22 06:31:45.550: INFO: At 2022-06-22 06:30:41 +0000 UTC - event for restored-pvc-tester-6z7ng: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester ... skipping 144 lines ... I0622 06:31:46.615] [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m I0622 06:31:46.616] I0622 06:31:46.616] [91mJun 22 06:22:07.113: VolumeSnapshot snapshot-2fmvv is not ready within 5m0s[0m I0622 06:31:46.616] I0622 06:31:46.616] test/e2e/storage/utils/snapshot.go:86 I0622 06:31:46.616] [90m------------------------------[0m I0622 06:31:46.617] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":-1,"completed":10,"skipped":1924,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:31:46.617] I0622 06:31:46.618] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:31:46.618] [90m------------------------------[0m I0622 06:31:46.618] [BeforeEach] [Testpattern: Inline-volume (xfs)][Slow] volumes I0622 06:31:46.619] test/e2e/storage/framework/testsuite.go:51 I0622 06:31:46.619] Jun 22 06:31:46.594: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping ... skipping 24 lines ... 
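The event trail for this spec is the whole restore path in miniature: a VolumeSnapshot is cut from the source claim, the restored claim names it as a dataSource, and provisioning of that claim only succeeds once the snapshot reports ready — here roughly ten minutes after creation, well past the suite's 5m readiness timeout, which is the actual failure. The two objects involved look roughly like this, with placeholder names throughout:

```yaml
# Sketch only: a snapshot cut from a source claim, and a second claim restored
# from it; the "error getting handle for DataSource Type VolumeSnapshot ...
# is not Ready" events above are the provisioner refusing the second object.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-demo                           # hypothetical name
spec:
  volumeSnapshotClassName: filestore-snapclass  # hypothetical class
  source:
    persistentVolumeClaimName: source-pvc       # hypothetical source claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                            # hypothetical name
spec:
  accessModes: ["ReadWriteMany"]                # assumed; not shown in the log
  storageClassName: filestore-sc                # hypothetical class
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: snapshot-demo
  resources:
    requests:
      storage: 1Ti
```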
I0622 06:31:46.716] I0622 06:31:46.717] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:31:46.717] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:31:46.717] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:31:46.717] [Testpattern: Pre-provisioned PV (default fs)] subPath I0622 06:31:46.718] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:31:46.718] [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:31:46.718] [90mtest/e2e/storage/testsuites/subpath.go:258[0m I0622 06:31:46.718] I0622 06:31:46.719] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 06:31:46.719] I0622 06:31:46.719] test/e2e/storage/external/external.go:269 I0622 06:31:46.719] [90m------------------------------[0m ... skipping 127 lines ... I0622 06:33:52.441] Jun 22 06:18:44.619: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:33:52.441] [1mSTEP[0m: creating a StorageClass snapshotting-9440-e2e-scpdq98 I0622 06:33:52.441] [1mSTEP[0m: creating a claim I0622 06:33:52.441] Jun 22 06:18:44.627: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:33:52.442] [1mSTEP[0m: [init] starting a pod to use the claim I0622 06:33:52.442] [1mSTEP[0m: [init] check pod success I0622 06:33:52.442] Jun 22 06:18:44.678: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-hs2rl" in namespace "snapshotting-9440" to be "Succeeded or Failed" I0622 06:33:52.442] Jun 22 06:18:44.685: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304291ms I0622 06:33:52.442] Jun 22 06:18:46.690: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011711855s I0622 06:33:52.443] Jun 22 06:18:48.689: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010914135s I0622 06:33:52.443] Jun 22 06:18:50.691: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01270986s I0622 06:33:52.443] Jun 22 06:18:52.691: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012594505s I0622 06:33:52.443] Jun 22 06:18:54.689: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010619777s ... skipping 62 lines ... I0622 06:33:52.459] Jun 22 06:21:00.691: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.012317451s I0622 06:33:52.459] Jun 22 06:21:02.689: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.010404674s I0622 06:33:52.460] Jun 22 06:21:04.690: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.011291441s I0622 06:33:52.460] Jun 22 06:21:06.690: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.011762508s I0622 06:33:52.460] Jun 22 06:21:08.689: INFO: Pod "pvc-snapshottable-tester-hs2rl": Phase="Succeeded", Reason="", readiness=false. 
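The [init] pod this spec polls for "Succeeded or Failed" above is just a short-lived writer that seeds the source claim with known data before the snapshot is cut; the later "modifying the data in the source PVC" step then changes it, so the restore can be checked against the original contents. Something in this spirit — pod and claim names are placeholders; the image and container name are the ones the events in this log report:

```yaml
# Sketch only: a one-shot writer pod of the kind the snapshottable specs run
# against the source claim before taking a snapshot.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-snapshottable-writer      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: volume-tester
    image: registry.k8s.io/e2e-test-images/busybox:1.29-2
    command: ["sh", "-c", "echo 'hello world' > /mnt/test/data && sync"]
    volumeMounts:
    - name: source-data
      mountPath: /mnt/test
  volumes:
  - name: source-data
    persistentVolumeClaim:
      claimName: source-pvc           # hypothetical claim name
```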
Elapsed: 2m24.011104943s I0622 06:33:52.460] [1mSTEP[0m: Saw pod success I0622 06:33:52.460] Jun 22 06:21:08.689: INFO: Pod "pvc-snapshottable-tester-hs2rl" satisfied condition "Succeeded or Failed" I0622 06:33:52.460] [1mSTEP[0m: [init] checking the claim I0622 06:33:52.460] Jun 22 06:21:08.693: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-gcpfs-fs-sc-basic-hddg4d9n] to have phase Bound I0622 06:33:52.461] Jun 22 06:21:08.696: INFO: PersistentVolumeClaim csi-gcpfs-fs-sc-basic-hddg4d9n found and phase=Bound (3.460834ms) I0622 06:33:52.461] [1mSTEP[0m: [init] checking the PV I0622 06:33:52.461] [1mSTEP[0m: [init] deleting the pod I0622 06:33:52.461] Jun 22 06:21:08.740: INFO: Pod pvc-snapshottable-tester-hs2rl has the following logs: ... skipping 152 lines ... I0622 06:33:52.481] Jun 22 06:25:57.864: INFO: VolumeSnapshot snapshot-rzzhh found but is not ready. I0622 06:33:52.481] Jun 22 06:25:59.869: INFO: VolumeSnapshot snapshot-rzzhh found but is not ready. I0622 06:33:52.481] Jun 22 06:26:01.875: INFO: VolumeSnapshot snapshot-rzzhh found but is not ready. I0622 06:33:52.482] Jun 22 06:26:03.880: INFO: VolumeSnapshot snapshot-rzzhh found but is not ready. I0622 06:33:52.482] Jun 22 06:26:05.885: INFO: VolumeSnapshot snapshot-rzzhh found but is not ready. I0622 06:33:52.482] Jun 22 06:26:07.891: INFO: VolumeSnapshot snapshot-rzzhh found but is not ready. I0622 06:33:52.482] Jun 22 06:26:09.892: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 06:33:52.482] Jun 22 06:26:09.892: INFO: Unexpected error: I0622 06:33:52.482] <*errors.errorString | 0xc001d13960>: { I0622 06:33:52.483] s: "VolumeSnapshot snapshot-rzzhh is not ready within 5m0s", I0622 06:33:52.483] } I0622 06:33:52.483] Jun 22 06:26:09.892: FAIL: VolumeSnapshot snapshot-rzzhh is not ready within 5m0s I0622 06:33:52.483] I0622 06:33:52.483] Full Stack Trace I0622 06:33:52.483] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc000765b28?}, 0xc000764130) I0622 06:33:52.483] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 06:33:52.484] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7fd2a034bd60, 0xc000ba5b80}, 0xc003d19800, {{0x71c3d70, 0x20}, {0x0, 0x0}, {0x713ca0a, 0x9}, {0x0, ...}, ...}, ...) I0622 06:33:52.484] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 8 lines ... I0622 06:33:52.485] created by testing.(*T).Run I0622 06:33:52.485] /usr/local/go/src/testing/testing.go:1486 +0x35f I0622 06:33:52.486] [1mSTEP[0m: checking the snapshot I0622 06:33:52.486] [1mSTEP[0m: checking the SnapshotContent I0622 06:33:52.486] [1mSTEP[0m: Modifying source data test I0622 06:33:52.486] [1mSTEP[0m: modifying the data in the source PVC I0622 06:33:52.486] Jun 22 06:26:09.918: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-dc6dm" in namespace "snapshotting-9440" to be "Succeeded or Failed" I0622 06:33:52.487] Jun 22 06:26:09.925: INFO: Pod "pvc-snapshottable-data-tester-dc6dm": Phase="Pending", Reason="", readiness=false. Elapsed: 7.416837ms I0622 06:33:52.487] Jun 22 06:26:11.930: INFO: Pod "pvc-snapshottable-data-tester-dc6dm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011884183s I0622 06:33:52.487] Jun 22 06:26:13.929: INFO: Pod "pvc-snapshottable-data-tester-dc6dm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011192226s I0622 06:33:52.488] Jun 22 06:26:15.930: INFO: Pod "pvc-snapshottable-data-tester-dc6dm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.012353749s I0622 06:33:52.488] Jun 22 06:26:17.931: INFO: Pod "pvc-snapshottable-data-tester-dc6dm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013467438s I0622 06:33:52.488] [1mSTEP[0m: Saw pod success I0622 06:33:52.488] Jun 22 06:26:17.931: INFO: Pod "pvc-snapshottable-data-tester-dc6dm" satisfied condition "Succeeded or Failed" I0622 06:33:52.489] Jun 22 06:26:17.988: INFO: Pod pvc-snapshottable-data-tester-dc6dm has the following logs: I0622 06:33:52.489] Jun 22 06:26:17.988: INFO: Deleting pod "pvc-snapshottable-data-tester-dc6dm" in namespace "snapshotting-9440" I0622 06:33:52.489] Jun 22 06:26:18.005: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-dc6dm" to be fully deleted I0622 06:33:52.489] [1mSTEP[0m: creating a pvc from the snapshot I0622 06:33:52.489] [1mSTEP[0m: starting a pod to use the snapshot I0622 06:33:52.490] Jun 22 06:26:18.047: INFO: Waiting up to 15m0s for pod "restored-pvc-tester-h4gq6" in namespace "snapshotting-9440" to be "running" ... skipping 176 lines ... I0622 06:33:52.530] Jun 22 06:31:18.575: INFO: volumesnapshotcontents snapcontent-ffbcc03c-0204-4d48-9c0d-e79df98074bc has been found and is not deleted I0622 06:33:52.530] Jun 22 06:31:19.582: INFO: volumesnapshotcontents snapcontent-ffbcc03c-0204-4d48-9c0d-e79df98074bc has been found and is not deleted I0622 06:33:52.530] Jun 22 06:31:20.601: INFO: volumesnapshotcontents snapcontent-ffbcc03c-0204-4d48-9c0d-e79df98074bc has been found and is not deleted I0622 06:33:52.531] Jun 22 06:31:21.606: INFO: volumesnapshotcontents snapcontent-ffbcc03c-0204-4d48-9c0d-e79df98074bc has been found and is not deleted I0622 06:33:52.531] Jun 22 06:31:22.623: INFO: volumesnapshotcontents snapcontent-ffbcc03c-0204-4d48-9c0d-e79df98074bc has been found and is not deleted I0622 06:33:52.531] Jun 22 06:31:23.630: INFO: volumesnapshotcontents snapcontent-ffbcc03c-0204-4d48-9c0d-e79df98074bc has been found and is not deleted I0622 06:33:52.532] Jun 22 06:31:24.630: INFO: WaitUntil failed after reaching the timeout 30s I0622 06:33:52.532] [AfterEach] volume snapshot controller I0622 06:33:52.532] test/e2e/storage/testsuites/snapshottable.go:172 I0622 06:33:52.532] Jun 22 06:31:24.639: INFO: Pod restored-pvc-tester-h4gq6 has the following logs: I0622 06:33:52.532] Jun 22 06:31:24.639: INFO: Deleting pod "restored-pvc-tester-h4gq6" in namespace "snapshotting-9440" I0622 06:33:52.533] Jun 22 06:31:24.643: INFO: Wait up to 5m0s for pod "restored-pvc-tester-h4gq6" to be fully deleted I0622 06:33:52.533] Jun 22 06:31:26.652: INFO: deleting claim "snapshotting-9440"/"pvc-nm9jh" ... skipping 50 lines ... I0622 06:33:52.547] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:21:05 +0000 UTC - event for pvc-snapshottable-tester-hs2rl: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 06:33:52.548] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:21:08 +0000 UTC - event for snapshot-rzzhh: {snapshot-controller } CreatingSnapshot: Waiting for a snapshot snapshotting-9440/snapshot-rzzhh to be created by the CSI driver. 
I0622 06:33:52.548] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:09 +0000 UTC - event for pvc-snapshottable-data-tester-dc6dm: {default-scheduler } Scheduled: Successfully assigned snapshotting-9440/pvc-snapshottable-data-tester-dc6dm to e2e-test-prow-minion-group-gcmd I0622 06:33:52.549] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:13 +0000 UTC - event for pvc-snapshottable-data-tester-dc6dm: {kubelet e2e-test-prow-minion-group-gcmd} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine I0622 06:33:52.549] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:13 +0000 UTC - event for pvc-snapshottable-data-tester-dc6dm: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester I0622 06:33:52.549] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:13 +0000 UTC - event for pvc-snapshottable-data-tester-dc6dm: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 06:33:52.550] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:18 +0000 UTC - event for pvc-nm9jh: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningFailed: failed to provision volume with StorageClass "snapshotting-9440-e2e-scpdq98": error getting handle for DataSource Type VolumeSnapshot by Name snapshot-rzzhh: snapshot snapshot-rzzhh is not Ready I0622 06:33:52.550] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:18 +0000 UTC - event for pvc-nm9jh: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } Provisioning: External provisioner is provisioning volume for claim "snapshotting-9440/pvc-nm9jh" I0622 06:33:52.551] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:18 +0000 UTC - event for pvc-nm9jh: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "filestore.csi.storage.gke.io" or manually created by system administrator I0622 06:33:52.551] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:26:18 +0000 UTC - event for pvc-nm9jh: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding I0622 06:33:52.551] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:27:25 +0000 UTC - event for snapshot-rzzhh: {snapshot-controller } SnapshotCreated: Snapshot snapshotting-9440/snapshot-rzzhh was successfully created by the CSI driver. I0622 06:33:52.552] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:27:25 +0000 UTC - event for snapshot-rzzhh: {snapshot-controller } SnapshotReady: Snapshot snapshotting-9440/snapshot-rzzhh is ready to use. I0622 06:33:52.552] Jun 22 06:33:51.897: INFO: At 2022-06-22 06:30:47 +0000 UTC - event for pvc-nm9jh: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningSucceeded: Successfully provisioned volume pvc-f24f3e0d-0308-4381-97e0-c79ca7352dac ... skipping 144 lines ... 
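The WaitForFirstConsumer event in the dump above reflects the binding mode on the generated class: the claim stays pending until a pod that consumes it is scheduled, and only then is the external Filestore provisioner asked for a volume. A class with that behaviour, with a placeholder name:

```yaml
# Sketch only: with WaitForFirstConsumer, provisioning is deferred until a
# consuming pod is scheduled; the WaitForFirstConsumer event above comes from
# the PV controller holding the claim until then.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-wffc                # hypothetical name
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```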
I0622 06:33:52.613] [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m I0622 06:33:52.613] I0622 06:33:52.613] [91mJun 22 06:26:09.892: VolumeSnapshot snapshot-rzzhh is not ready within 5m0s[0m I0622 06:33:52.613] I0622 06:33:52.613] test/e2e/storage/utils/snapshot.go:86 I0622 06:33:52.613] [90m------------------------------[0m I0622 06:33:52.615] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":-1,"completed":10,"skipped":1128,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)"]} I0622 06:33:52.615] I0622 06:33:52.615] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:33:52.615] [90m------------------------------[0m I0622 06:33:52.615] [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] I0622 06:33:52.615] test/e2e/storage/framework/testsuite.go:51 I0622 06:33:52.616] Jun 22 06:33:52.544: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 387 lines ... I0622 06:34:40.047] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:34:40.047] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral I0622 06:34:40.047] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:34:40.047] should support two pods which have the same volume definition I0622 06:34:40.047] [90mtest/e2e/storage/testsuites/ephemeral.go:277[0m I0622 06:34:40.047] [90m------------------------------[0m I0622 06:34:40.048] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":13,"skipped":1610,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:34:40.048] I0622 06:34:40.048] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:34:40.048] [90m------------------------------[0m I0622 06:34:40.048] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning I0622 06:34:40.048] test/e2e/storage/framework/testsuite.go:51 I0622 06:34:40.048] Jun 22 06:34:39.997: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping ... skipping 48 lines ... 
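[editor note] The failure recorded above ("VolumeSnapshot snapshot-rzzhh is not ready within 5m0s", test/e2e/storage/utils/snapshot.go:86) is a readiness wait on the VolumeSnapshot object timing out. A minimal sketch of that wait, assuming the external-snapshotter v6 clientset; waitForSnapshotReady is an illustrative name, not the framework helper at snapshot.go:86:

package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"

	snapclient "github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned"
)

// waitForSnapshotReady polls status.readyToUse on a VolumeSnapshot until it is
// true or the timeout (5m0s in the run above) expires.
func waitForSnapshotReady(ctx context.Context, cs snapclient.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		vs, err := cs.SnapshotV1().VolumeSnapshots(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return vs.Status != nil && vs.Status.ReadyToUse != nil && *vs.Status.ReadyToUse, nil
	})
}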
I0622 06:36:40.447] Jun 22 06:31:47.043: INFO: Creating resource for dynamic PV I0622 06:36:40.447] Jun 22 06:31:47.043: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:36:40.448] [1mSTEP[0m: creating a StorageClass provisioning-5216-e2e-sc2nv2q I0622 06:36:40.448] [1mSTEP[0m: creating a claim I0622 06:36:40.448] Jun 22 06:31:47.047: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:36:40.448] [1mSTEP[0m: Creating pod to format volume volume-prep-provisioning-5216 I0622 06:36:40.448] Jun 22 06:31:47.075: INFO: Waiting up to 10m0s for pod "volume-prep-provisioning-5216" in namespace "provisioning-5216" to be "Succeeded or Failed" I0622 06:36:40.449] Jun 22 06:31:47.080: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 4.931842ms I0622 06:36:40.449] Jun 22 06:31:49.084: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008710263s I0622 06:36:40.449] Jun 22 06:31:51.087: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011916435s I0622 06:36:40.449] Jun 22 06:31:53.086: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011214123s I0622 06:36:40.450] Jun 22 06:31:55.084: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009284892s I0622 06:36:40.450] Jun 22 06:31:57.084: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009283937s ... skipping 64 lines ... I0622 06:36:40.465] Jun 22 06:34:07.108: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.033015275s I0622 06:36:40.466] Jun 22 06:34:09.092: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.017229477s I0622 06:36:40.466] Jun 22 06:34:11.086: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.010737759s I0622 06:36:40.466] Jun 22 06:34:13.086: INFO: Pod "volume-prep-provisioning-5216": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.011106394s I0622 06:36:40.466] Jun 22 06:34:15.085: INFO: Pod "volume-prep-provisioning-5216": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2m28.010044515s I0622 06:36:40.466] [1mSTEP[0m: Saw pod success I0622 06:36:40.467] Jun 22 06:34:15.085: INFO: Pod "volume-prep-provisioning-5216" satisfied condition "Succeeded or Failed" I0622 06:36:40.467] Jun 22 06:34:15.085: INFO: Deleting pod "volume-prep-provisioning-5216" in namespace "provisioning-5216" I0622 06:36:40.467] Jun 22 06:34:15.100: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-5216" to be fully deleted I0622 06:36:40.467] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-nz6x I0622 06:36:40.467] [1mSTEP[0m: Checking for subpath error in container status I0622 06:36:40.468] Jun 22 06:34:23.124: INFO: Deleting pod "pod-subpath-test-dynamicpv-nz6x" in namespace "provisioning-5216" I0622 06:36:40.468] Jun 22 06:34:23.136: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-nz6x" to be fully deleted I0622 06:36:40.468] [1mSTEP[0m: Deleting pod I0622 06:36:40.468] Jun 22 06:34:25.148: INFO: Deleting pod "pod-subpath-test-dynamicpv-nz6x" in namespace "provisioning-5216" I0622 06:36:40.468] [1mSTEP[0m: Deleting pvc I0622 06:36:40.468] Jun 22 06:34:25.156: INFO: Deleting PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hddtbqz2" ... skipping 38 lines ... I0622 06:36:40.474] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:36:40.474] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:36:40.474] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:36:40.474] should verify container cannot write to subpath readonly volumes [Slow] I0622 06:36:40.474] [90mtest/e2e/storage/testsuites/subpath.go:425[0m I0622 06:36:40.474] [90m------------------------------[0m I0622 06:36:40.475] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":-1,"completed":11,"skipped":2245,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:36:40.475] I0622 06:36:40.491] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:36:40.492] [90m------------------------------[0m I0622 06:36:40.492] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath I0622 06:36:40.492] test/e2e/storage/framework/testsuite.go:51 I0622 06:36:40.492] Jun 22 06:36:40.490: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping ... skipping 98 lines ... 
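[editor note] The pod-subpath-test-* pods above exercise volume mounts with a subPath, including the read-only case that just passed. A rough client-go sketch of such a mount; the container, path, and subPath names are illustrative, only the busybox image string appears in the log:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// subPathContainer mounts one subdirectory of the test volume read-only;
// writes under /mnt/test are then expected to fail, which is what the
// readonly-subpath e2e case verifies.
func subPathContainer(volumeName string) corev1.Container {
	return corev1.Container{
		Name:    "volume-tester",
		Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2",
		Command: []string{"sh", "-c", "sleep 3600"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      volumeName,
			MountPath: "/mnt/test",
			SubPath:   "provisioning",
			ReadOnly:  true,
		}},
	}
}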
I0622 06:36:40.677] I0622 06:36:40.677] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:36:40.677] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:36:40.677] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:36:40.678] [Testpattern: Inline-volume (default fs)] subPath I0622 06:36:40.678] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:36:40.678] [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:36:40.678] [90mtest/e2e/storage/testsuites/subpath.go:269[0m I0622 06:36:40.678] I0622 06:36:40.678] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "InlineVolume" - skipping[0m I0622 06:36:40.678] I0622 06:36:40.678] test/e2e/storage/external/external.go:269 I0622 06:36:40.678] [90m------------------------------[0m ... skipping 10 lines ... I0622 06:36:40.774] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 06:36:40.775] [It] should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) I0622 06:36:40.775] test/e2e/storage/testsuites/snapshottable.go:278 I0622 06:36:40.775] Jun 22 06:36:40.740: INFO: volume type "GenericEphemeralVolume" is ephemeral I0622 06:36:40.775] [AfterEach] volume snapshot controller I0622 06:36:40.775] test/e2e/storage/testsuites/snapshottable.go:172 I0622 06:36:40.776] Jun 22 06:36:40.746: INFO: Error getting logs for pod restored-pvc-tester-qdhlh: the server could not find the requested resource (get pods restored-pvc-tester-qdhlh) I0622 06:36:40.776] Jun 22 06:36:40.746: INFO: Deleting pod "restored-pvc-tester-qdhlh" in namespace "snapshotting-9436" I0622 06:36:40.776] Jun 22 06:36:40.750: INFO: deleting snapshot "snapshotting-9436"/"snapshot-wv8d7" I0622 06:36:40.776] Jun 22 06:36:40.752: INFO: deleting snapshot class "snapshotting-9436r5zx4" I0622 06:36:40.776] Jun 22 06:36:40.755: INFO: Waiting up to 5m0s for volumesnapshotclasses snapshotting-9436r5zx4 to be deleted I0622 06:36:40.776] Jun 22 06:36:40.757: INFO: volumesnapshotclasses snapshotting-9436r5zx4 is not found and has been deleted I0622 06:36:40.777] Jun 22 06:36:40.758: INFO: WaitUntil finished successfully after 2.305235ms ... skipping 70 lines ... I0622 06:41:51.310] Jun 22 06:36:40.921: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:41:51.310] [1mSTEP[0m: creating a StorageClass provisioning-7699-e2e-scgc4v2 I0622 06:41:51.310] [1mSTEP[0m: creating a claim I0622 06:41:51.311] Jun 22 06:36:40.927: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:41:51.311] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-w7qp I0622 06:41:51.311] [1mSTEP[0m: Creating a pod to test subpath I0622 06:41:51.311] Jun 22 06:36:40.960: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-w7qp" in namespace "provisioning-7699" to be "Succeeded or Failed" I0622 06:41:51.312] Jun 22 06:36:40.963: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.150196ms I0622 06:41:51.312] Jun 22 06:36:42.967: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006925476s I0622 06:41:51.312] Jun 22 06:36:44.976: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.015969727s I0622 06:41:51.312] Jun 22 06:36:46.969: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009147994s I0622 06:41:51.313] Jun 22 06:36:48.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007645507s I0622 06:41:51.313] Jun 22 06:36:50.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00816484s ... skipping 65 lines ... I0622 06:41:51.331] Jun 22 06:39:02.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.007473725s I0622 06:41:51.332] Jun 22 06:39:04.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.007979092s I0622 06:41:51.332] Jun 22 06:39:06.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.00788303s I0622 06:41:51.332] Jun 22 06:39:08.967: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.007007005s I0622 06:41:51.333] Jun 22 06:39:10.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m30.007600014s I0622 06:41:51.333] [1mSTEP[0m: Saw pod success I0622 06:41:51.333] Jun 22 06:39:10.968: INFO: Pod "pod-subpath-test-dynamicpv-w7qp" satisfied condition "Succeeded or Failed" I0622 06:41:51.333] Jun 22 06:39:10.971: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-w7qp container test-container-subpath-dynamicpv-w7qp: <nil> I0622 06:41:51.334] [1mSTEP[0m: delete the pod I0622 06:41:51.334] Jun 22 06:39:11.011: INFO: Waiting for pod pod-subpath-test-dynamicpv-w7qp to disappear I0622 06:41:51.334] Jun 22 06:39:11.017: INFO: Pod pod-subpath-test-dynamicpv-w7qp no longer exists I0622 06:41:51.334] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-w7qp I0622 06:41:51.335] Jun 22 06:39:11.017: INFO: Deleting pod "pod-subpath-test-dynamicpv-w7qp" in namespace "provisioning-7699" I0622 06:41:51.335] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-w7qp I0622 06:41:51.335] [1mSTEP[0m: Creating a pod to test subpath I0622 06:41:51.335] Jun 22 06:39:11.030: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-w7qp" in namespace "provisioning-7699" to be "Succeeded or Failed" I0622 06:41:51.335] Jun 22 06:39:11.036: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228402ms I0622 06:41:51.336] Jun 22 06:39:13.041: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011556713s I0622 06:41:51.336] Jun 22 06:39:15.045: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015615865s I0622 06:41:51.336] Jun 22 06:39:17.040: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Running", Reason="", readiness=true. Elapsed: 6.010297717s I0622 06:41:51.337] Jun 22 06:39:19.041: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Running", Reason="", readiness=false. Elapsed: 8.011641137s I0622 06:41:51.337] Jun 22 06:39:21.040: INFO: Pod "pod-subpath-test-dynamicpv-w7qp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.010392443s I0622 06:41:51.337] [1mSTEP[0m: Saw pod success I0622 06:41:51.337] Jun 22 06:39:21.040: INFO: Pod "pod-subpath-test-dynamicpv-w7qp" satisfied condition "Succeeded or Failed" I0622 06:41:51.338] Jun 22 06:39:21.043: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-w7qp container test-container-subpath-dynamicpv-w7qp: <nil> I0622 06:41:51.338] [1mSTEP[0m: delete the pod I0622 06:41:51.338] Jun 22 06:39:21.066: INFO: Waiting for pod pod-subpath-test-dynamicpv-w7qp to disappear I0622 06:41:51.338] Jun 22 06:39:21.069: INFO: Pod pod-subpath-test-dynamicpv-w7qp no longer exists I0622 06:41:51.338] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-w7qp I0622 06:41:51.339] Jun 22 06:39:21.069: INFO: Deleting pod "pod-subpath-test-dynamicpv-w7qp" in namespace "provisioning-7699" ... skipping 45 lines ... I0622 06:41:51.350] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:41:51.351] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:41:51.351] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:41:51.351] should support existing directories when readOnly specified in the volumeSource I0622 06:41:51.351] [90mtest/e2e/storage/testsuites/subpath.go:397[0m I0622 06:41:51.352] [90m------------------------------[0m I0622 06:41:51.353] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":12,"skipped":2478,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:41:51.353] I0622 06:42:31.843] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:42:31.843] [90m------------------------------[0m I0622 06:42:31.843] [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 06:42:31.843] test/e2e/storage/framework/testsuite.go:51 I0622 06:42:31.843] [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 314 lines ... 
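[editor note] The long ladders of Phase="Pending" ... Elapsed lines above come from a phase poll that stops on either terminal pod phase ("Succeeded or Failed"). A minimal sketch of that loop against a plain kubernetes.Interface client; waitForPodSucceededOrFailed and the 2s interval are illustrative, not the framework's implementation:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed polls the pod phase until it reaches a terminal
// state, mirroring the 'Waiting up to 10m0s for pod ... to be "Succeeded or
// Failed"' lines in the log.
func waitForPodSucceededOrFailed(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}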
I0622 06:42:31.906] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:42:31.906] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 06:42:31.906] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:42:31.906] should access to two volumes with the same volume mode and retain data across pod recreation on the same node I0622 06:42:31.906] [90mtest/e2e/storage/testsuites/multivolume.go:138[0m I0622 06:42:31.906] [90m------------------------------[0m I0622 06:42:31.907] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":-1,"completed":14,"skipped":1653,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:42:31.907] I0622 06:44:44.200] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:44:44.201] [90m------------------------------[0m I0622 06:44:44.201] [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] I0622 06:44:44.201] test/e2e/storage/framework/testsuite.go:51 I0622 06:44:44.201] [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 9 lines ... I0622 06:44:44.203] Jun 22 06:33:52.736: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:44:44.203] [1mSTEP[0m: creating a StorageClass snapshotting-9189-e2e-sckwwjp I0622 06:44:44.203] [1mSTEP[0m: creating a claim I0622 06:44:44.204] Jun 22 06:33:52.741: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:44:44.204] [1mSTEP[0m: [init] starting a pod to use the claim I0622 06:44:44.204] [1mSTEP[0m: [init] check pod success I0622 06:44:44.204] Jun 22 06:33:52.771: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-2889s" in namespace "snapshotting-9189" to be "Succeeded or Failed" I0622 06:44:44.205] Jun 22 06:33:52.774: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.994266ms I0622 06:44:44.205] Jun 22 06:33:54.778: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007072849s I0622 06:44:44.205] Jun 22 06:33:56.780: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008371571s I0622 06:44:44.205] Jun 22 06:33:58.779: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007684039s I0622 06:44:44.206] Jun 22 06:34:00.785: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013253512s I0622 06:44:44.206] Jun 22 06:34:02.778: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006555149s ... skipping 62 lines ... I0622 06:44:44.223] Jun 22 06:36:08.778: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m16.006778175s I0622 06:44:44.224] Jun 22 06:36:10.779: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.007675567s I0622 06:44:44.224] Jun 22 06:36:12.778: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.006993755s I0622 06:44:44.224] Jun 22 06:36:14.779: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.007459998s I0622 06:44:44.224] Jun 22 06:36:16.778: INFO: Pod "pvc-snapshottable-tester-2889s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m24.006832516s I0622 06:44:44.225] [1mSTEP[0m: Saw pod success I0622 06:44:44.225] Jun 22 06:36:16.778: INFO: Pod "pvc-snapshottable-tester-2889s" satisfied condition "Succeeded or Failed" I0622 06:44:44.225] [1mSTEP[0m: [init] checking the claim I0622 06:44:44.225] Jun 22 06:36:16.781: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-gcpfs-fs-sc-basic-hddzn4k7] to have phase Bound I0622 06:44:44.225] Jun 22 06:36:16.784: INFO: PersistentVolumeClaim csi-gcpfs-fs-sc-basic-hddzn4k7 found and phase=Bound (2.612122ms) I0622 06:44:44.226] [1mSTEP[0m: [init] checking the PV I0622 06:44:44.226] [1mSTEP[0m: [init] deleting the pod I0622 06:44:44.226] Jun 22 06:36:16.815: INFO: Pod pvc-snapshottable-tester-2889s has the following logs: ... skipping 152 lines ... I0622 06:44:44.252] Jun 22 06:41:05.908: INFO: VolumeSnapshot snapshot-sfbn6 found but is not ready. I0622 06:44:44.252] Jun 22 06:41:07.912: INFO: VolumeSnapshot snapshot-sfbn6 found but is not ready. I0622 06:44:44.252] Jun 22 06:41:09.918: INFO: VolumeSnapshot snapshot-sfbn6 found but is not ready. I0622 06:44:44.252] Jun 22 06:41:11.924: INFO: VolumeSnapshot snapshot-sfbn6 found but is not ready. I0622 06:44:44.252] Jun 22 06:41:13.940: INFO: VolumeSnapshot snapshot-sfbn6 found but is not ready. I0622 06:44:44.252] Jun 22 06:41:15.946: INFO: VolumeSnapshot snapshot-sfbn6 found but is not ready. I0622 06:44:44.253] Jun 22 06:41:17.947: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 06:44:44.253] Jun 22 06:41:17.947: INFO: Unexpected error: I0622 06:44:44.253] <*errors.errorString | 0xc003343a40>: { I0622 06:44:44.253] s: "VolumeSnapshot snapshot-sfbn6 is not ready within 5m0s", I0622 06:44:44.253] } I0622 06:44:44.253] Jun 22 06:41:17.947: FAIL: VolumeSnapshot snapshot-sfbn6 is not ready within 5m0s I0622 06:44:44.254] I0622 06:44:44.254] Full Stack Trace I0622 06:44:44.254] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc0035050d8?}, 0xc003f76120) I0622 06:44:44.254] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 06:44:44.254] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7fd2a034bd60, 0xc000ba5b80}, 0xc001d164e0, {{0x71fffc1, 0x28}, {0x0, 0x0}, {0x713ca0a, 0x9}, {0x0, ...}, ...}, ...) I0622 06:44:44.255] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 213 lines ... 
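[editor note] CreateSnapshotResource in the stack trace above first creates a VolumeSnapshot pointing at the bound claim and then waits for readiness, which is the wait that timed out. A hedged sketch of just the creation step with the external-snapshotter v6 API; newSnapshot and the generated name prefix are illustrative, not the framework code:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	snapv1 "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
	snapclient "github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned"
)

// newSnapshot creates a dynamic VolumeSnapshot for an existing, bound PVC; the
// CSI driver then has to cut the snapshot before status.readyToUse turns true.
func newSnapshot(ctx context.Context, cs snapclient.Interface, ns, pvcName, className string) (*snapv1.VolumeSnapshot, error) {
	vs := &snapv1.VolumeSnapshot{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "snapshot-", Namespace: ns},
		Spec: snapv1.VolumeSnapshotSpec{
			VolumeSnapshotClassName: &className,
			Source: snapv1.VolumeSnapshotSource{
				PersistentVolumeClaimName: &pvcName,
			},
		},
	}
	return cs.SnapshotV1().VolumeSnapshots(ns).Create(ctx, vs, metav1.CreateOptions{})
}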
I0622 06:44:44.321] [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m I0622 06:44:44.321] I0622 06:44:44.321] [91mJun 22 06:41:17.947: VolumeSnapshot snapshot-sfbn6 is not ready within 5m0s[0m I0622 06:44:44.321] I0622 06:44:44.321] test/e2e/storage/utils/snapshot.go:86 I0622 06:44:44.321] [90m------------------------------[0m I0622 06:44:44.323] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":-1,"completed":10,"skipped":1247,"failed":3,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)"]} I0622 06:44:44.323] I0622 06:44:44.323] [36mS[0m[36mS[0m[36mS[0m I0622 06:44:44.323] [90m------------------------------[0m I0622 06:44:44.323] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] I0622 06:44:44.324] test/e2e/storage/framework/testsuite.go:51 I0622 06:44:44.324] Jun 22 06:44:44.256: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support Block -- skipping ... skipping 278 lines ... 
I0622 06:46:47.476] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:46:47.476] [Testpattern: Dynamic PV (default fs)] volumes I0622 06:46:47.476] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:46:47.476] should store data I0622 06:46:47.477] [90mtest/e2e/storage/testsuites/volumes.go:161[0m I0622 06:46:47.477] [90m------------------------------[0m I0622 06:46:47.477] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":13,"skipped":2499,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:46:47.477] I0622 06:46:47.477] [36mS[0m[36mS[0m[36mS[0m I0622 06:46:47.478] [90m------------------------------[0m I0622 06:46:47.478] [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] I0622 06:46:47.478] test/e2e/storage/framework/testsuite.go:51 I0622 06:46:47.478] Jun 22 06:46:47.453: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ext4 -- skipping ... skipping 24 lines ... I0622 06:46:47.480] I0622 06:46:47.480] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:46:47.480] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:46:47.481] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:46:47.481] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode I0622 06:46:47.481] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:46:47.481] [36m[1mshould fail to create pod by failing to mount volume [Slow] [BeforeEach][0m I0622 06:46:47.481] [90mtest/e2e/storage/testsuites/volumemode.go:199[0m I0622 06:46:47.481] I0622 06:46:47.481] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 06:46:47.481] I0622 06:46:47.481] test/e2e/storage/external/external.go:269 I0622 06:46:47.481] [90m------------------------------[0m ... skipping 234 lines ... 
I0622 06:47:41.437] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:47:41.437] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral I0622 06:47:41.437] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:47:41.437] should create read-only inline ephemeral volume I0622 06:47:41.437] [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m I0622 06:47:41.437] [90m------------------------------[0m I0622 06:47:41.438] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":15,"skipped":1681,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:47:41.438] I0622 06:47:41.551] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:47:41.552] [90m------------------------------[0m I0622 06:47:41.552] [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] I0622 06:47:41.552] test/e2e/storage/framework/testsuite.go:51 I0622 06:47:41.552] Jun 22 06:47:41.550: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ext4 -- skipping ... skipping 156 lines ... I0622 06:49:54.863] Jun 22 06:44:44.435: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:49:54.863] [1mSTEP[0m: creating a StorageClass provisioning-7733-e2e-sc7vrp2 I0622 06:49:54.863] [1mSTEP[0m: creating a claim I0622 06:49:54.863] Jun 22 06:44:44.439: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:49:54.864] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-c4zr I0622 06:49:54.864] [1mSTEP[0m: Creating a pod to test atomic-volume-subpath I0622 06:49:54.864] Jun 22 06:44:44.464: INFO: Waiting up to 10m0s for pod "pod-subpath-test-dynamicpv-c4zr" in namespace "provisioning-7733" to be "Succeeded or Failed" I0622 06:49:54.864] Jun 22 06:44:44.470: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116248ms I0622 06:49:54.865] Jun 22 06:44:46.475: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011120368s I0622 06:49:54.865] Jun 22 06:44:48.475: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010483486s I0622 06:49:54.865] Jun 22 06:44:50.475: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010228803s I0622 06:49:54.865] Jun 22 06:44:52.476: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011720758s I0622 06:49:54.866] Jun 22 06:44:54.507: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042288493s ... skipping 75 lines ... I0622 06:49:54.880] Jun 22 06:47:26.476: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Running", Reason="", readiness=true. Elapsed: 2m42.011843203s I0622 06:49:54.880] Jun 22 06:47:28.474: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Running", Reason="", readiness=true. Elapsed: 2m44.010175568s I0622 06:49:54.880] Jun 22 06:47:30.477: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Running", Reason="", readiness=true. Elapsed: 2m46.01220962s I0622 06:49:54.880] Jun 22 06:47:32.475: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.011139402s I0622 06:49:54.880] Jun 22 06:47:34.475: INFO: Pod "pod-subpath-test-dynamicpv-c4zr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m50.010712922s I0622 06:49:54.881] [1mSTEP[0m: Saw pod success I0622 06:49:54.881] Jun 22 06:47:34.475: INFO: Pod "pod-subpath-test-dynamicpv-c4zr" satisfied condition "Succeeded or Failed" I0622 06:49:54.881] Jun 22 06:47:34.478: INFO: Trying to get logs from node e2e-test-prow-minion-group-gcmd pod pod-subpath-test-dynamicpv-c4zr container test-container-subpath-dynamicpv-c4zr: <nil> I0622 06:49:54.881] [1mSTEP[0m: delete the pod I0622 06:49:54.881] Jun 22 06:47:34.539: INFO: Waiting for pod pod-subpath-test-dynamicpv-c4zr to disappear I0622 06:49:54.881] Jun 22 06:47:34.545: INFO: Pod pod-subpath-test-dynamicpv-c4zr no longer exists I0622 06:49:54.881] [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-c4zr I0622 06:49:54.882] Jun 22 06:47:34.545: INFO: Deleting pod "pod-subpath-test-dynamicpv-c4zr" in namespace "provisioning-7733" ... skipping 43 lines ... 
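[editor note] The "Trying to get logs from node ... container ...: <nil>" lines above are a plain container-log fetch performed before the finished test pod is deleted. A small client-go sketch of that fetch; fetchLogs is an illustrative helper name, not the framework's:

package e2esketch

import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// fetchLogs streams the logs of one container, as the framework does right
// before it deletes a completed test pod.
func fetchLogs(ctx context.Context, c kubernetes.Interface, ns, pod, container string) (string, error) {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container})
	stream, err := req.Stream(ctx)
	if err != nil {
		return "", err
	}
	defer stream.Close()
	data, err := io.ReadAll(stream)
	return string(data), err
}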
I0622 06:49:54.888] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:49:54.888] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:49:54.888] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:49:54.888] should support file as subpath [LinuxOnly] I0622 06:49:54.889] [90mtest/e2e/storage/testsuites/subpath.go:232[0m I0622 06:49:54.889] [90m------------------------------[0m I0622 06:49:54.890] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":1396,"failed":3,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)"]} I0622 06:49:54.890] I0622 06:49:55.095] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:49:55.095] [90m------------------------------[0m I0622 06:49:55.095] [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes I0622 06:49:55.095] 
test/e2e/storage/framework/testsuite.go:51 I0622 06:49:55.096] Jun 22 06:49:55.093: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ext4 -- skipping ... skipping 24 lines ... I0622 06:49:55.118] I0622 06:49:55.118] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:49:55.118] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:49:55.118] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:49:55.118] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath I0622 06:49:55.119] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:49:55.119] [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:49:55.119] [90mtest/e2e/storage/testsuites/subpath.go:280[0m I0622 06:49:55.119] I0622 06:49:55.119] [36mDriver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping[0m I0622 06:49:55.119] I0622 06:49:55.119] test/e2e/storage/framework/testsuite.go:121 I0622 06:49:55.120] [90m------------------------------[0m ... skipping 225 lines ... I0622 06:52:21.130] test/e2e/framework/framework.go:186 I0622 06:52:21.130] [1mSTEP[0m: Creating a kubernetes client I0622 06:52:21.130] Jun 22 06:47:41.713: INFO: >>> kubeConfig: /root/.kube/config I0622 06:52:21.130] [1mSTEP[0m: Building a namespace api object, basename provisioning I0622 06:52:21.130] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 06:52:21.131] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 06:52:21.131] [It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] I0622 06:52:21.131] test/e2e/storage/testsuites/subpath.go:280 I0622 06:52:21.131] Jun 22 06:47:41.760: INFO: Creating resource for dynamic PV I0622 06:52:21.131] Jun 22 06:47:41.760: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:52:21.131] [1mSTEP[0m: creating a StorageClass provisioning-4381-e2e-schtxjv I0622 06:52:21.131] [1mSTEP[0m: creating a claim I0622 06:52:21.131] Jun 22 06:47:41.763: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:52:21.131] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-5zc6 I0622 06:52:21.132] [1mSTEP[0m: Checking for subpath error in container status I0622 06:52:21.132] Jun 22 06:50:03.807: INFO: Deleting pod "pod-subpath-test-dynamicpv-5zc6" in namespace "provisioning-4381" I0622 06:52:21.132] Jun 22 06:50:03.834: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-5zc6" to be fully deleted I0622 06:52:21.132] [1mSTEP[0m: Deleting pod I0622 06:52:21.132] Jun 22 06:50:05.857: INFO: Deleting pod "pod-subpath-test-dynamicpv-5zc6" in namespace "provisioning-4381" I0622 06:52:21.132] [1mSTEP[0m: Deleting pvc I0622 06:52:21.132] Jun 22 06:50:05.866: INFO: Deleting PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hdd2z62k" ... skipping 35 lines ... 
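[editor note] "Checking for subpath error in container status" in the backstepping case above means reading the container's waiting state rather than its logs, because a rejected subPath keeps the container from ever starting. A rough sketch of that check; hasSubPathError is an illustrative name and the matched substring is an assumption, the real framework matches kubelet's exact event message:

package e2esketch

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// hasSubPathError reports whether any container in the pod is stuck in a
// waiting state whose message mentions the subPath, which is how a rejected
// backstepping subPath ("..") surfaces in pod status.
func hasSubPathError(pod *corev1.Pod) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		w := cs.State.Waiting
		if w != nil && strings.Contains(w.Message, "subPath") {
			return true
		}
	}
	return false
}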
I0622 06:52:21.138] I0622 06:52:21.138] [32m• [SLOW TEST:279.414 seconds][0m I0622 06:52:21.138] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:52:21.138] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:52:21.138] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:52:21.138] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:52:21.138] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] I0622 06:52:21.138] [90mtest/e2e/storage/testsuites/subpath.go:280[0m I0622 06:52:21.138] [90m------------------------------[0m I0622 06:52:21.139] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":-1,"completed":16,"skipped":1971,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:52:21.139] I0622 06:52:21.161] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:52:21.162] [90m------------------------------[0m I0622 06:52:21.162] [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] I0622 06:52:21.163] test/e2e/storage/framework/testsuite.go:51 I0622 06:52:21.163] Jun 22 06:52:21.160: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping ... skipping 24 lines ... I0622 06:52:21.170] I0622 06:52:21.171] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:52:21.171] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:52:21.171] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:52:21.171] [Testpattern: Dynamic PV (immediate binding)] topology I0622 06:52:21.171] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:52:21.171] [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m I0622 06:52:21.171] [90mtest/e2e/storage/testsuites/topology.go:194[0m I0622 06:52:21.171] I0622 06:52:21.171] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support topology - skipping[0m I0622 06:52:21.171] I0622 06:52:21.171] test/e2e/storage/testsuites/topology.go:93 I0622 06:52:21.172] [90m------------------------------[0m ... skipping 29 lines ... I0622 06:52:21.190] I0622 06:52:21.190] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:52:21.190] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:52:21.190] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:52:21.190] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode I0622 06:52:21.190] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:52:21.191] [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m I0622 06:52:21.191] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m I0622 06:52:21.191] I0622 06:52:21.191] [36mDriver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "PreprovisionedPV" - skipping[0m I0622 06:52:21.191] I0622 06:52:21.191] test/e2e/storage/external/external.go:269 I0622 06:52:21.191] [90m------------------------------[0m ... skipping 50 lines ... 
I0622 06:52:21.219] I0622 06:52:21.219] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:52:21.219] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:52:21.220] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:52:21.220] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath I0622 06:52:21.220] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:52:21.220] [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:52:21.220] [90mtest/e2e/storage/testsuites/subpath.go:269[0m I0622 06:52:21.220] I0622 06:52:21.221] [36mDriver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping[0m I0622 06:52:21.221] I0622 06:52:21.221] test/e2e/storage/framework/testsuite.go:121 I0622 06:52:21.221] [90m------------------------------[0m ... skipping 446 lines ... I0622 06:53:53.131] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:53:53.132] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 06:53:53.132] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:53:53.132] should access to two volumes with the same volume mode and retain data across pod recreation on different node I0622 06:53:53.132] [90mtest/e2e/storage/testsuites/multivolume.go:168[0m I0622 06:53:53.132] [90m------------------------------[0m I0622 06:53:53.133] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":-1,"completed":14,"skipped":2657,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 06:53:53.133] I0622 06:54:35.812] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:54:35.812] [90m------------------------------[0m I0622 06:54:35.812] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:54:35.812] test/e2e/storage/framework/testsuite.go:51 I0622 06:54:35.813] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:54:35.813] test/e2e/framework/framework.go:186 I0622 06:54:35.813] [1mSTEP[0m: Creating a kubernetes client I0622 06:54:35.813] Jun 22 06:49:55.511: INFO: >>> kubeConfig: /root/.kube/config I0622 06:54:35.813] [1mSTEP[0m: Building a namespace api object, basename provisioning I0622 06:54:35.813] [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace I0622 06:54:35.814] [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace I0622 06:54:35.814] [It] should fail if subpath directory is outside the volume [Slow][LinuxOnly] I0622 06:54:35.814] test/e2e/storage/testsuites/subpath.go:242 I0622 06:54:35.814] Jun 22 06:49:55.543: INFO: Creating resource for dynamic PV I0622 06:54:35.814] Jun 22 06:49:55.543: INFO: Using claimSize:1Ti, test suite supported 
size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 06:54:35.814] [1mSTEP[0m: creating a StorageClass provisioning-5919-e2e-scb8k2m I0622 06:54:35.814] [1mSTEP[0m: creating a claim I0622 06:54:35.815] Jun 22 06:49:55.551: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 06:54:35.815] [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-x8dg I0622 06:54:35.815] [1mSTEP[0m: Checking for subpath error in container status I0622 06:54:35.815] Jun 22 06:52:13.593: INFO: Deleting pod "pod-subpath-test-dynamicpv-x8dg" in namespace "provisioning-5919" I0622 06:54:35.815] Jun 22 06:52:13.600: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-x8dg" to be fully deleted I0622 06:54:35.815] [1mSTEP[0m: Deleting pod I0622 06:54:35.815] Jun 22 06:52:15.614: INFO: Deleting pod "pod-subpath-test-dynamicpv-x8dg" in namespace "provisioning-5919" I0622 06:54:35.815] [1mSTEP[0m: Deleting pvc I0622 06:54:35.815] Jun 22 06:52:15.623: INFO: Deleting PersistentVolumeClaim "csi-gcpfs-fs-sc-basic-hddjhqbv" ... skipping 36 lines ... I0622 06:54:35.821] I0622 06:54:35.821] [32m• [SLOW TEST:280.299 seconds][0m I0622 06:54:35.821] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:54:35.821] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:54:35.821] [Testpattern: Dynamic PV (default fs)] subPath I0622 06:54:35.822] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:54:35.822] should fail if subpath directory is outside the volume [Slow][LinuxOnly] I0622 06:54:35.822] [90mtest/e2e/storage/testsuites/subpath.go:242[0m I0622 06:54:35.823] [90m------------------------------[0m I0622 06:54:35.824] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":-1,"completed":12,"skipped":1955,"failed":3,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)"]} I0622 06:54:35.825] I0622 06:54:35.877] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:54:35.877] [90m------------------------------[0m I0622 06:54:35.877] [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral I0622 06:54:35.878] 
test/e2e/storage/framework/testsuite.go:51 I0622 06:54:35.878] Jun 22 06:54:35.875: INFO: Driver "csi-gcpfs-fs-sc-basic-hdd" does not support volume type "CSIInlineVolume" - skipping ... skipping 24 lines ... I0622 06:54:35.952] I0622 06:54:35.952] [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m I0622 06:54:35.952] External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] I0622 06:54:35.953] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:54:35.953] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath I0622 06:54:35.953] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:54:35.953] [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m I0622 06:54:35.953] [90mtest/e2e/storage/testsuites/subpath.go:258[0m I0622 06:54:35.953] I0622 06:54:35.953] [36mDriver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping[0m I0622 06:54:35.953] I0622 06:54:35.954] test/e2e/storage/framework/testsuite.go:121 I0622 06:54:35.954] [90m------------------------------[0m ... skipping 182 lines ... I0622 06:58:16.201] [90mtest/e2e/storage/external/external.go:174[0m I0622 06:58:16.201] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral I0622 06:58:16.201] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 06:58:16.201] should support expansion of pvcs created for ephemeral pvcs I0622 06:58:16.202] [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m I0622 06:58:16.202] [90m------------------------------[0m I0622 06:58:16.202] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs","total":-1,"completed":17,"skipped":2226,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 06:58:16.202] I0622 06:58:16.202] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 06:58:16.202] [90m------------------------------[0m I0622 06:58:16.203] [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] I0622 06:58:16.203] test/e2e/storage/framework/testsuite.go:51 I0622 06:58:16.203] Jun 22 06:58:16.190: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support xfs -- skipping ... skipping 200 lines ... 
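[editor note] The Generic Ephemeral-volume patterns above (including the expansion case that passed) declare the claim inline in the pod spec, and Kubernetes creates a per-pod PVC from the embedded template. A hedged sketch of such a volume with client-go types; the storage class and size are placeholders, and on k8s.io/api v0.29+ the Resources field type is VolumeResourceRequirements:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// ephemeralVolume declares a generic ephemeral volume; the control plane
// creates a PVC named "<pod>-<volume>" from the template and deletes it with
// the pod, which is what the ephemeral test patterns exercise.
func ephemeralVolume(name, storageClass string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Ephemeral: &corev1.EphemeralVolumeSource{
				VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
					Spec: corev1.PersistentVolumeClaimSpec{
						StorageClassName: &storageClass,
						AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Ti")},
						},
					},
				},
			},
		},
	}
}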
I0622 07:03:24.853] [90mtest/e2e/storage/external/external.go:174[0m I0622 07:03:24.854] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] I0622 07:03:24.854] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 07:03:24.855] should concurrently access the single read-only volume from pods on the same node I0622 07:03:24.855] [90mtest/e2e/storage/testsuites/multivolume.go:423[0m I0622 07:03:24.856] [90m------------------------------[0m I0622 07:03:24.856] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":18,"skipped":2324,"failed":1,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]"]} I0622 07:03:24.857] I0622 07:03:24.857] [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m I0622 07:03:24.858] [90m------------------------------[0m I0622 07:03:24.858] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] I0622 07:03:24.859] test/e2e/storage/framework/testsuite.go:51 I0622 07:03:24.859] Jun 22 07:03:24.641: INFO: Driver csi-gcpfs-fs-sc-basic-hdd doesn't support ntfs -- skipping ... skipping 291 lines ... I0622 07:04:27.417] [90mtest/e2e/storage/external/external.go:174[0m I0622 07:04:27.417] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral I0622 07:04:27.417] [90mtest/e2e/storage/framework/testsuite.go:50[0m I0622 07:04:27.417] should support two pods which have the same volume definition I0622 07:04:27.418] [90mtest/e2e/storage/testsuites/ephemeral.go:277[0m I0622 07:04:27.418] [90m------------------------------[0m I0622 07:04:27.418] {"msg":"PASSED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":15,"skipped":2682,"failed":2,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)"]} I0622 07:04:27.418] Jun 22 07:04:27.363: INFO: Running AfterSuite actions on all nodes I0622 07:04:27.419] Jun 22 07:04:27.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 I0622 07:04:27.419] Jun 22 07:04:27.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 I0622 07:04:27.419] Jun 22 07:04:27.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 I0622 07:04:27.419] Jun 22 07:04:27.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 I0622 07:04:27.419] Jun 22 07:04:27.363: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 16 lines ... 
I0622 07:07:16.726] Jun 22 06:54:36.171: INFO: Using claimSize:1Ti, test suite supported size:{ 1Mi}, driver(csi-gcpfs-fs-sc-basic-hdd) supported size:{ 1Mi} I0622 07:07:16.726] [1mSTEP[0m: creating a StorageClass snapshotting-1342-e2e-scvzwtk I0622 07:07:16.726] [1mSTEP[0m: creating a claim I0622 07:07:16.727] Jun 22 06:54:36.178: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0622 07:07:16.727] [1mSTEP[0m: [init] starting a pod to use the claim I0622 07:07:16.727] [1mSTEP[0m: [init] check pod success I0622 07:07:16.727] Jun 22 06:54:36.214: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-vcnkp" in namespace "snapshotting-1342" to be "Succeeded or Failed" I0622 07:07:16.728] Jun 22 06:54:36.220: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 5.956292ms I0622 07:07:16.728] Jun 22 06:54:38.225: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010540044s I0622 07:07:16.728] Jun 22 06:54:40.225: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010330202s I0622 07:07:16.728] Jun 22 06:54:42.226: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011088568s I0622 07:07:16.729] Jun 22 06:54:44.228: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013196103s I0622 07:07:16.729] Jun 22 06:54:46.225: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010319264s ... skipping 69 lines ... I0622 07:07:16.743] Jun 22 06:57:06.224: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.009904636s I0622 07:07:16.743] Jun 22 06:57:08.224: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.009919625s I0622 07:07:16.744] Jun 22 06:57:10.228: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.013549432s I0622 07:07:16.744] Jun 22 06:57:12.226: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.011273574s I0622 07:07:16.744] Jun 22 06:57:14.225: INFO: Pod "pvc-snapshottable-tester-vcnkp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m38.010926201s I0622 07:07:16.744] [1mSTEP[0m: Saw pod success I0622 07:07:16.744] Jun 22 06:57:14.225: INFO: Pod "pvc-snapshottable-tester-vcnkp" satisfied condition "Succeeded or Failed" I0622 07:07:16.745] [1mSTEP[0m: [init] checking the claim I0622 07:07:16.745] Jun 22 06:57:14.229: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-gcpfs-fs-sc-basic-hddhvs7h] to have phase Bound I0622 07:07:16.745] Jun 22 06:57:14.232: INFO: PersistentVolumeClaim csi-gcpfs-fs-sc-basic-hddhvs7h found and phase=Bound (2.997859ms) I0622 07:07:16.745] [1mSTEP[0m: [init] checking the PV I0622 07:07:16.745] [1mSTEP[0m: [init] deleting the pod I0622 07:07:16.746] Jun 22 06:57:14.267: INFO: Pod pvc-snapshottable-tester-vcnkp has the following logs: ... skipping 152 lines ... I0622 07:07:16.767] Jun 22 07:02:03.377: INFO: VolumeSnapshot snapshot-z85mb found but is not ready. I0622 07:07:16.767] Jun 22 07:02:05.383: INFO: VolumeSnapshot snapshot-z85mb found but is not ready. I0622 07:07:16.767] Jun 22 07:02:07.391: INFO: VolumeSnapshot snapshot-z85mb found but is not ready. 
I0622 07:07:16.767] Jun 22 07:02:09.396: INFO: VolumeSnapshot snapshot-z85mb found but is not ready. I0622 07:07:16.767] Jun 22 07:02:11.407: INFO: VolumeSnapshot snapshot-z85mb found but is not ready. I0622 07:07:16.767] Jun 22 07:02:13.413: INFO: VolumeSnapshot snapshot-z85mb found but is not ready. I0622 07:07:16.767] Jun 22 07:02:15.414: INFO: WaitUntil failed after reaching the timeout 5m0s I0622 07:07:16.768] Jun 22 07:02:15.414: INFO: Unexpected error: I0622 07:07:16.768] <*errors.errorString | 0xc004475db0>: { I0622 07:07:16.768] s: "VolumeSnapshot snapshot-z85mb is not ready within 5m0s", I0622 07:07:16.768] } I0622 07:07:16.768] Jun 22 07:02:15.414: FAIL: VolumeSnapshot snapshot-z85mb is not ready within 5m0s I0622 07:07:16.768] I0622 07:07:16.768] Full Stack Trace I0622 07:07:16.769] k8s.io/kubernetes/test/e2e/storage/utils.GetSnapshotContentFromSnapshot({0x79f49e0, 0xc002d7a2a0?}, 0xc002d7a1f8) I0622 07:07:16.769] test/e2e/storage/utils/snapshot.go:86 +0x1ad I0622 07:07:16.769] k8s.io/kubernetes/test/e2e/storage/framework.CreateSnapshotResource({0x7fd2a034bd60, 0xc000ba5b80}, 0xc0016d12c0, {{0x71fff99, 0x28}, {0x0, 0x0}, {0x713ca0a, 0x9}, {0x0, ...}, ...}, ...) I0622 07:07:16.769] test/e2e/storage/framework/snapshot_resource.go:92 +0x246 ... skipping 71 lines ... I0622 07:07:16.780] Jun 22 07:06:50.906: INFO: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 found and phase=Bound (4m35.475094686s) I0622 07:07:16.780] Jun 22 07:06:55.910: INFO: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 found and phase=Bound (4m40.479717695s) I0622 07:07:16.781] Jun 22 07:07:00.915: INFO: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 found and phase=Bound (4m45.48448295s) I0622 07:07:16.781] Jun 22 07:07:05.920: INFO: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 found and phase=Bound (4m50.489345295s) I0622 07:07:16.781] Jun 22 07:07:10.924: INFO: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 found and phase=Bound (4m55.493803247s) I0622 07:07:16.781] [1mSTEP[0m: Deleting sc I0622 07:07:16.781] Jun 22 07:07:15.932: INFO: Unexpected error: I0622 07:07:16.781] <errors.aggregate | len:1, cap:1>: [ I0622 07:07:16.781] { I0622 07:07:16.781] msg: "persistent Volume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 not deleted by dynamic provisioner: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 still exists within 5m0s", I0622 07:07:16.781] err: { I0622 07:07:16.782] s: "PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 still exists within 5m0s", I0622 07:07:16.782] }, I0622 07:07:16.782] }, I0622 07:07:16.782] ] I0622 07:07:16.782] Jun 22 07:07:15.932: FAIL: persistent Volume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 not deleted by dynamic provisioner: PersistentVolume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 still exists within 5m0s I0622 07:07:16.782] I0622 07:07:16.782] Full Stack Trace I0622 07:07:16.782] k8s.io/kubernetes/test/e2e/storage/testsuites.(*snapshottableTestSuite).DefineTests.func1.1.1() I0622 07:07:16.782] test/e2e/storage/testsuites/snapshottable.go:142 +0x3c I0622 07:07:16.782] k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0xc001b19340?) I0622 07:07:16.783] test/e2e/storage/utils/utils.go:714 +0x6d ... skipping 19 lines ... 
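Two distinct timeouts are reported above for this spec: the VolumeSnapshot never became ready within 5m0s, and during teardown the dynamically provisioned PV was still Bound 5m0s after cleanup began. When triaging a run like this against a cluster that is still up, the snapshot object, its VolumeSnapshotContent, and any finalizers left on the PV are the first things worth dumping. The commands below are a minimal sketch using the object names from this log; they assume the snapshot.storage.k8s.io CRDs are installed (a prerequisite for these tests) and that the test namespace still exists.

  # Why did snapshot-z85mb never reach readyToUse, and what is still holding the PV?
  kubectl -n snapshotting-1342 describe volumesnapshot snapshot-z85mb
  kubectl get volumesnapshotcontent
  kubectl get pv pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 -o jsonpath='{.metadata.finalizers}{"\n"}'
  kubectl -n snapshotting-1342 get events --sort-by=.lastTimestamp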
I0622 07:07:16.785] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:06 +0000 UTC - event for csi-gcpfs-fs-sc-basic-hddhvs7h: {filestore.csi.storage.gke.io_e2e-test-prow-minion-group-qhf3_fd79bb18-f088-4002-8d73-e99bd9a91597 } ProvisioningSucceeded: Successfully provisioned volume pvc-e4a9edf9-396c-4d1d-bb3c-38dad786ab06 I0622 07:07:16.785] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:07 +0000 UTC - event for pvc-snapshottable-tester-vcnkp: {default-scheduler } Scheduled: Successfully assigned snapshotting-1342/pvc-snapshottable-tester-vcnkp to e2e-test-prow-minion-group-gcmd I0622 07:07:16.786] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:11 +0000 UTC - event for pvc-snapshottable-tester-vcnkp: {kubelet e2e-test-prow-minion-group-gcmd} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine I0622 07:07:16.786] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:11 +0000 UTC - event for pvc-snapshottable-tester-vcnkp: {kubelet e2e-test-prow-minion-group-gcmd} Created: Created container volume-tester I0622 07:07:16.786] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:11 +0000 UTC - event for pvc-snapshottable-tester-vcnkp: {kubelet e2e-test-prow-minion-group-gcmd} Started: Started container volume-tester I0622 07:07:16.786] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:14 +0000 UTC - event for snapshot-z85mb: {snapshot-controller } CreatingSnapshot: Waiting for a snapshot snapshotting-1342/snapshot-z85mb to be created by the CSI driver. I0622 07:07:16.787] Jun 22 07:07:15.937: INFO: At 2022-06-22 06:57:14 +0000 UTC - event for snapshot-z85mb: {snapshot-controller } SnapshotFinalizerError: Failed to check and update snapshot: snapshot controller failed to update snapshotting-1342/snapshot-z85mb on API server: volumesnapshots.snapshot.storage.k8s.io "snapshot-z85mb" is forbidden: User "system:serviceaccount:kube-system:volume-snapshot-controller" cannot patch resource "volumesnapshots" in API group "snapshot.storage.k8s.io" in the namespace "snapshotting-1342" I0622 07:07:16.787] Jun 22 07:07:15.940: INFO: POD NODE PHASE GRACE CONDITIONS I0622 07:07:16.787] Jun 22 07:07:15.940: INFO: I0622 07:07:16.787] Jun 22 07:07:15.945: INFO: I0622 07:07:16.787] Logging node info for node e2e-test-prow-master I0622 07:07:16.793] Jun 22 07:07:15.950: INFO: Node Info: &Node{ObjectMeta:{e2e-test-prow-master b8226c90-9b3c-48a2-b644-ddd710bc95a0 22976 0 2022-06-22 04:56:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-test-prow-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-central1 topology.kubernetes.io/zone:us-central1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-22 04:56:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-06-22 04:57:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-06-22 04:57:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-06-22 07:05:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://ci-kubernetes-e2e-gke-gpu/us-central1-b/e2e-test-prow-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864322048 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602178048 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 04:57:02 +0000 UTC,LastTransitionTime:2022-06-22 04:57:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 07:05:00 +0000 UTC,LastTransitionTime:2022-06-22 04:56:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 07:05:00 +0000 UTC,LastTransitionTime:2022-06-22 04:56:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 07:05:00 +0000 UTC,LastTransitionTime:2022-06-22 04:56:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 07:05:00 +0000 UTC,LastTransitionTime:2022-06-22 04:56:58 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.128.0.2,},NodeAddress{Type:ExternalIP,Address:34.123.17.188,},NodeAddress{Type:InternalDNS,Address:e2e-test-prow-master.c.ci-kubernetes-e2e-gke-gpu.internal,},NodeAddress{Type:Hostname,Address:e2e-test-prow-master.c.ci-kubernetes-e2e-gke-gpu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:827ce0da750632b542aeb837fcdcf6a8,SystemUUID:827ce0da-7506-32b5-42ae-b837fcdcf6a8,BootID:4bd09e53-aa16-47c2-b8ec-6d928be62687,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.25.0-alpha.1.65+3beb8dc5967801,KubeProxyVersion:v1.25.0-alpha.1.65+3beb8dc5967801,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:127453679,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:117361285,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:84029209,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:51079300,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d59ff043b173896cedeb897225ecfd2cdfc48591a04df035b439c04431421fc2 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.30],SizeBytes:17920446,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} I0622 07:07:16.793] Jun 22 07:07:15.951: INFO: ... skipping 133 lines ... 
I0622 07:07:16.845] [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m I0622 07:07:16.845] I0622 07:07:16.845] [91mJun 22 07:02:15.414: VolumeSnapshot snapshot-z85mb is not ready within 5m0s[0m I0622 07:07:16.846] I0622 07:07:16.846] test/e2e/storage/utils/snapshot.go:86 I0622 07:07:16.846] [90m------------------------------[0m I0622 07:07:16.847] {"msg":"FAILED External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":-1,"completed":12,"skipped":2200,"failed":4,"failures":["External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","External Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)"]} I0622 07:07:16.847] Jun 22 07:07:16.723: INFO: Running AfterSuite actions on all nodes I0622 07:07:16.847] Jun 22 07:07:16.723: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 I0622 07:07:16.847] Jun 22 07:07:16.723: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 I0622 07:07:16.848] Jun 22 07:07:16.723: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 I0622 07:07:16.848] Jun 22 07:07:16.723: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 I0622 07:07:16.848] Jun 22 07:07:16.723: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 8 lines ... 
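The SnapshotFinalizerError event recorded earlier for snapshot-z85mb (User "system:serviceaccount:kube-system:volume-snapshot-controller" cannot patch resource "volumesnapshots" in API group "snapshot.storage.k8s.io") is the most concrete clue in this spec: the in-cluster snapshot controller lacks RBAC for the snapshot resources, so it can never update the snapshot's status, which would explain why every snapshot-based pattern in the summary times out at the same readiness wait. A way to confirm this and, as a stop-gap, grant the missing verbs is sketched below; the ClusterRole, binding, and deployment names are placeholders rather than names taken from this cluster, and the durable fix belongs in the RBAC manifests that ship with the controller.

  # Confirm whether the snapshot controller's service account can patch VolumeSnapshots.
  kubectl auth can-i patch volumesnapshots.snapshot.storage.k8s.io \
    --as=system:serviceaccount:kube-system:volume-snapshot-controller -n snapshotting-1342

  # Check what the controller itself is logging (deployment name assumed).
  kubectl -n kube-system logs deploy/volume-snapshot-controller --tail=50

  # Possible stop-gap: grant the missing verbs with a dedicated ClusterRole and binding
  # (names are placeholders).
  kubectl create clusterrole snapshot-controller-patch-fix \
    --verb=get,list,watch,update,patch \
    --resource=volumesnapshots.snapshot.storage.k8s.io
  kubectl create clusterrolebinding snapshot-controller-patch-fix \
    --clusterrole=snapshot-controller-patch-fix \
    --serviceaccount=kube-system:volume-snapshot-controller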
I0622 07:07:16.860] Jun 22 07:03:24.761: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 I0622 07:07:16.860] Jun 22 07:03:24.761: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 I0622 07:07:16.860] Jun 22 07:03:24.761: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 I0622 07:07:16.861] Jun 22 07:03:24.761: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 I0622 07:07:16.861] Jun 22 07:07:16.858: INFO: Running AfterSuite actions on node 1 I0622 07:07:16.861] Jun 22 07:07:16.858: INFO: Dumping logs locally to: /workspace/_artifacts/fs-sc-basic-hdd/49b7873c-f1e3-11ec-aa53-764d9ce5d219 I0622 07:07:16.861] Jun 22 07:07:16.858: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory I0622 07:07:16.861] I0622 07:07:16.875] I0622 07:07:16.876] I0622 07:07:16.876] [91m[1mSummarizing 7 Failures:[0m I0622 07:07:16.876] I0622 07:07:16.877] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] [0m[90mvolume snapshot controller [0m[0m [0m[91m[1m[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) [0m I0622 07:07:16.877] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.877] I0622 07:07:16.877] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Dynamic PV (default fs)] provisioning [0m[91m[1m[It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [0m I0622 07:07:16.877] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.877] I0622 07:07:16.877] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] [0m[90mvolume snapshot controller [0m[0m [0m[91m[1m[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [0m I0622 07:07:16.878] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.878] I0622 07:07:16.878] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] [0m[90mvolume snapshot controller [0m[0m [0m[91m[1m[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) [0m I0622 07:07:16.878] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.878] I0622 07:07:16.879] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] [0m[90mvolume snapshot controller [0m[0m [0m[91m[1m[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [0m I0622 07:07:16.879] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.879] I0622 07:07:16.879] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] [0m[90mvolume snapshot controller [0m[0m [0m[91m[1m[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [0m I0622 
07:07:16.880] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.880] I0622 07:07:16.880] [91m[1m[Fail] [0m[90mExternal Storage [Driver: csi-gcpfs-fs-sc-basic-hdd] [0m[0m[Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] [0m[90mvolume snapshot controller [0m[0m [0m[91m[1m[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [0m I0622 07:07:16.880] [37mtest/e2e/storage/utils/snapshot.go:86[0m I0622 07:07:16.880] I0622 07:07:16.880] [1m[91mRan 52 of 7309 Specs in 7755.045 seconds[0m I0622 07:07:16.881] [1m[91mFAIL![0m -- [32m[1m45 Passed[0m | [91m[1m7 Failed[0m | [33m[1m0 Pending[0m | [36m[1m7257 Skipped[0m I0622 07:07:16.886] I0622 07:07:16.887] I0622 07:07:16.887] Ginkgo ran 1 suite in 2h9m18.94234937s I0622 07:07:16.887] Test Suite Failed I0622 07:07:16.910] Deleting driver resources I0622 07:07:16.911] [/go/src/sigs.k8s.io/gcp-filestore-csi-driver/deploy/kubernetes/cluster_cleanup.sh] I0622 07:07:16.917] GOPATH is /go W0622 07:07:17.017] F0622 07:07:16.892601 102748 ginkgo.go:215] failed to run ginkgo tester: exit status 1 W0622 07:07:17.018] Error: exit status 255 I0622 07:07:18.142] namespace "gcp-filestore-csi-driver" deleted I0622 07:07:18.150] serviceaccount "gcp-filestore-csi-controller-sa" deleted I0622 07:07:18.159] serviceaccount "gcp-filestore-csi-node-sa" deleted I0622 07:07:18.167] role.rbac.authorization.k8s.io "gcp-filestore-csi-leaderelection-role" deleted I0622 07:07:18.176] clusterrole.rbac.authorization.k8s.io "gcp-filestore-csi-provisioner-role" deleted I0622 07:07:18.186] clusterrole.rbac.authorization.k8s.io "gcp-filestore-csi-resizer-role" deleted ... skipping 6 lines ... I0622 07:07:18.236] clusterrolebinding.rbac.authorization.k8s.io "gcp-filestore-csi-snapshotter-binding" deleted I0622 07:07:18.242] priorityclass.scheduling.k8s.io "csi-gcp-fs-controller" deleted I0622 07:07:18.260] priorityclass.scheduling.k8s.io "csi-gcp-fs-node" deleted I0622 07:07:18.271] deployment.apps "gcp-filestore-csi-controller" deleted I0622 07:07:18.328] daemonset.apps "gcp-filestore-csi-node" deleted I0622 07:07:18.342] csidriver.storage.k8s.io "filestore.csi.storage.gke.io" deleted W0622 07:08:03.166] Error from server (NotFound): namespaces "gcp-filestore-csi-driver" not found W0622 07:08:03.229] Project: ci-kubernetes-e2e-gke-gpu W0622 07:08:03.229] Network Project: ci-kubernetes-e2e-gke-gpu W0622 07:08:03.230] Zone: us-central1-b I0622 07:08:03.330] Bringing Down E2E Cluster on GCE I0622 07:08:03.330] [/tmp/gcp-fs-driver-tmp388079035/kubernetes/hack/e2e-internal/e2e-down.sh] I0622 07:08:03.331] Shutting down test cluster in background. ... skipping 51 lines ... W0622 07:15:14.634] Associated tags: W0622 07:15:14.635] - 310b4a5f-78fb-41bb-80ec-d37bd61c14c3 W0622 07:15:14.636] Tags: W0622 07:15:14.637] - gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver:310b4a5f-78fb-41bb-80ec-d37bd61c14c3 W0622 07:15:14.859] Deleted [gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver:310b4a5f-78fb-41bb-80ec-d37bd61c14c3]. W0622 07:15:15.724] Deleted [gcr.io/ci-kubernetes-e2e-gke-gpu/gcp-filestore-csi-driver@sha256:d629b6ca0f401ef919cbc889fe85c70d75c50a0afc239731c5f88f8e80270b5b]. 
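The suite summary above (45 passed, 7 failed, all seven in snapshot-related patterns and all failing at test/e2e/storage/utils/snapshot.go:86) is what produces the chain of non-zero exits: ginkgo exits 1, the driver's test wrapper surfaces exit status 255, and from there the failure simply propagates up the job wrappers while the driver and cluster teardown proceed as usual. The "namespaces \"gcp-filestore-csi-driver\" not found" message during cleanup is benign here, since the namespace had already been deleted a few lines earlier. For iterating on just the failing specs against a cluster that is still running, something along these lines is typical; the binary location, testdriver path, and focus expression are placeholders for illustration, not values from this job.

  # Hypothetical re-run of only the snapshot specs with the Kubernetes e2e.test binary;
  # paths and the focus regex are assumed.
  ./e2e.test \
    --kubeconfig="$HOME/.kube/config" \
    --storage.testdriver=/path/to/gcp-filestore-testdriver.yaml \
    --ginkgo.focus='External Storage.*snapshottable'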
W0622 07:15:17.336] F0622 07:15:17.335987 7001 main.go:179] Failed to run integration test: runCSITests failed: failed to run tests on e2e cluster: exit status 1 W0622 07:15:17.351] Traceback (most recent call last): W0622 07:15:17.352] File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 50, in <module> W0622 07:15:17.355] main(ARGS.env, ARGS.cmd + ARGS.args) W0622 07:15:17.356] File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 41, in main W0622 07:15:17.357] check(*cmd) W0622 07:15:17.358] File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check W0622 07:15:17.358] subprocess.check_call(cmd) W0622 07:15:17.359] File "/usr/lib/python2.7/subprocess.py", line 190, in check_call W0622 07:15:17.360] raise CalledProcessError(retcode, cmd) W0622 07:15:17.361] subprocess.CalledProcessError: Command '('test/run-k8s-integration.sh',)' returned non-zero exit status 255 E0622 07:15:17.406] Command failed I0622 07:15:17.407] process 622 exited with code 1 after 169.8m E0622 07:15:17.407] FAIL: pull-gcp-filestore-csi-driver-kubernetes-integration I0622 07:15:17.410] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json W0622 07:15:18.730] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com] I0622 07:15:18.920] process 104234 exited with code 0 after 0.0m I0622 07:15:18.921] Call: gcloud config get-value account I0622 07:15:20.248] process 104248 exited with code 0 after 0.0m I0622 07:15:20.249] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com I0622 07:15:20.249] Upload result and artifacts... I0622 07:15:20.249] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/311/pull-gcp-filestore-csi-driver-kubernetes-integration/1539464493598248960 I0622 07:15:20.251] Call: gsutil ls gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/311/pull-gcp-filestore-csi-driver-kubernetes-integration/1539464493598248960/artifacts W0622 07:15:22.489] CommandException: One or more URLs matched no objects. E0622 07:15:22.980] Command failed I0622 07:15:22.980] process 104262 exited with code 1 after 0.0m W0622 07:15:22.981] Remote dir gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/311/pull-gcp-filestore-csi-driver-kubernetes-integration/1539464493598248960/artifacts not exist yet I0622 07:15:22.982] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/311/pull-gcp-filestore-csi-driver-kubernetes-integration/1539464493598248960/artifacts I0622 07:15:29.511] process 104402 exited with code 0 after 0.1m W0622 07:15:29.512] metadata path /workspace/_artifacts/metadata.json does not exist W0622 07:15:29.512] metadata not found or invalid, init with empty metadata ... skipping 23 lines ...
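The final "CommandException: One or more URLs matched no objects" is only the uploader's pre-check that the remote artifacts directory does not exist yet; the local artifacts under /workspace/_artifacts are then copied to the GCS path shown above. To inspect them after the job, the same bucket path can be read back directly:

  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/311/pull-gcp-filestore-csi-driver-kubernetes-integration/1539464493598248960/artifacts
  # Or copy everything locally for offline inspection:
  gsutil -m cp -r gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/311/pull-gcp-filestore-csi-driver-kubernetes-integration/1539464493598248960/artifacts ./artifacts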