Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-03 23:38
Elapsed: 31m39s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 159 lines ...
..done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I1003 23:39:33.200584    4706 up.go:43] Cleaning up any leaked resources from previous cluster
I1003 23:39:33.200625    4706 dumplogs.go:40] /logs/artifacts/f1d77951-24a2-11ec-b766-fab168edef1d/kops toolbox dump --name e2e-5690f240d7-0a91d.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh3970738516/key --ssh-user prow
I1003 23:39:33.218472    4743 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1003 23:39:33.218567    4743 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-5690f240d7-0a91d.k8s.local" not found
W1003 23:39:33.446609    4706 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1003 23:39:33.446650    4706 down.go:48] /logs/artifacts/f1d77951-24a2-11ec-b766-fab168edef1d/kops delete cluster --name e2e-5690f240d7-0a91d.k8s.local --yes
I1003 23:39:33.465114    4753 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1003 23:39:33.465208    4753 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-5690f240d7-0a91d.k8s.local" not found
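The cleanup step above is best-effort: both `kops toolbox dump` and `kops delete cluster` fail with "not found" because no cluster from a previous run exists, and the harness only logs a warning (`down.go:34`) and continues. A minimal sketch of that tolerate-failure pattern, with illustrative helper names (the real harness is Go inside kubetest2, not this code):

```python
import subprocess

def best_effort_cleanup(cmds, run=subprocess.run):
    """Run each cleanup command, logging failures instead of aborting,
    so a leftover-resource sweep cannot fail the job on a clean project."""
    for cmd in cmds:
        result = run(cmd)
        if result.returncode != 0:
            # Mirrors the W-level "Dumping cluster logs ... failed" warning above.
            print(f"W cleanup step {cmd[0]!r} failed: exit status {result.returncode}")
```

With a stubbed runner this shows the key property: a nonzero exit from one step does not stop the sweep.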
I1003 23:39:33.640930    4706 gcs.go:51] gsutil ls -b -p gce-gci-upg-lat-1-3-ctl-skew gs://gce-gci-upg-lat-1-3-ctl-skew-state-f1
I1003 23:39:35.006531    4706 gcs.go:70] gsutil mb -p gce-gci-upg-lat-1-3-ctl-skew gs://gce-gci-upg-lat-1-3-ctl-skew-state-f1
Creating gs://gce-gci-upg-lat-1-3-ctl-skew-state-f1/...
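The two `gcs.go` lines show an ensure-bucket idiom for the kops state store: `gsutil ls -b` probes for the bucket, and `gsutil mb` creates it only when the probe fails. Sketched here with injected predicates rather than real `gsutil` calls (function and parameter names are illustrative, not the harness's actual API):

```python
def ensure_bucket(bucket, exists, create):
    """Create the GCS state-store bucket only if it is missing,
    i.e. the shell idiom: gsutil ls -b BUCKET || gsutil mb BUCKET."""
    if not exists(bucket):
        create(bucket)
        return True   # bucket was created
    return False      # bucket already existed
```

Injecting `exists`/`create` keeps the check-then-create decision testable without touching GCS.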
I1003 23:39:36.687316    4706 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/03 23:39:36 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1003 23:39:36.695320    4706 http.go:37] curl https://ip.jsb.workers.dev
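The two `http.go:37` lines above show a fallback chain for discovering the runner's external IP (used below for `--admin-access`): the GCE metadata server returns 404 when the instance has no external IP, so the tool falls back to a public IP-echo service. A minimal sketch of that pattern with stubbed fetchers (names are illustrative, not kubetest2's actual code):

```python
def external_ip(fetch_metadata, fetch_fallback):
    """Return this machine's external IP, preferring the GCE metadata server."""
    try:
        # e.g. GET http://metadata.google.internal/computeMetadata/v1/
        #      instance/network-interfaces/0/access-configs/0/external-ip
        return fetch_metadata()
    except Exception:
        # Metadata server 404s without an external IP; fall back to an
        # IP-echo service such as https://ip.jsb.workers.dev
        return fetch_fallback()

# Stubbed example, no network needed:
def no_external_ip():
    raise RuntimeError("metadata service returned 404")

print(external_ip(no_external_ip, lambda: "35.193.93.145"))  # -> 35.193.93.145
```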
I1003 23:39:36.802877    4706 up.go:144] /logs/artifacts/f1d77951-24a2-11ec-b766-fab168edef1d/kops create cluster --name e2e-5690f240d7-0a91d.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 --ssh-public-key /tmp/kops-ssh3970738516/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --channel=alpha --networking=kubenet --container-runtime=docker --admin-access 35.193.93.145/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west3-a --master-size e2-standard-2 --project gce-gci-upg-lat-1-3-ctl-skew --vpc e2e-5690f240d7-0a91d
I1003 23:39:36.822258    5041 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1003 23:39:36.822450    5041 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1003 23:39:36.844402    5041 create_cluster.go:838] Using SSH public key: /tmp/kops-ssh3970738516/key.pub
W1003 23:39:37.207552    5041 new_cluster.go:355] VMs will be configured to use the GCE default compute Service Account! This is an anti-pattern
... skipping 20 lines ...
W1003 23:39:41.762681    5041 vfs_castore.go:377] CA private key was not found
I1003 23:39:41.856525    5041 keypair.go:213] Issuing new certificate: "service-account"
I1003 23:39:41.860316    5041 keypair.go:213] Issuing new certificate: "kubernetes-ca"
I1003 23:39:50.740915    5041 executor.go:111] Tasks: 39 done / 65 total; 19 can run
I1003 23:39:50.861063    5041 keypair.go:213] Issuing new certificate: "kubelet"
I1003 23:39:51.062118    5041 keypair.go:213] Issuing new certificate: "kube-proxy"
W1003 23:39:51.914838    5041 executor.go:139] error running task "FirewallRule/node-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914914    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914924    5041 executor.go:139] error running task "FirewallRule/https-api-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914932    5041 executor.go:139] error running task "FirewallRule/master-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914940    5041 executor.go:139] error running task "FirewallRule/https-api-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914966    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914975    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914981    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914988    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.914995    5041 executor.go:139] error running task "FirewallRule/master-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.915001    5041 executor.go:139] error running task "FirewallRule/node-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.915007    5041 executor.go:139] error running task "FirewallRule/pod-cidrs-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:51.915013    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m58s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
I1003 23:39:51.915029    5041 executor.go:111] Tasks: 45 done / 65 total; 16 can run
W1003 23:39:55.708815    5041 executor.go:139] error running task "FirewallRule/https-api-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708863    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708873    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708882    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708890    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708897    5041 executor.go:139] error running task "FirewallRule/master-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708904    5041 executor.go:139] error running task "FirewallRule/pod-cidrs-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708911    5041 executor.go:139] error running task "FirewallRule/node-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708945    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708952    5041 executor.go:139] error running task "FirewallRule/node-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708958    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708966    5041 executor.go:139] error running task "FirewallRule/https-api-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:39:55.708975    5041 executor.go:139] error running task "FirewallRule/master-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
I1003 23:39:55.708999    5041 executor.go:111] Tasks: 48 done / 65 total; 16 can run
E1003 23:40:04.066891    5041 op.go:136] GCE operation failed: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready
W1003 23:40:04.066960    5041 executor.go:139] error running task "FirewallRule/https-api-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.066974    5041 executor.go:139] error running task "FirewallRule/master-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.066984    5041 executor.go:139] error running task "FirewallRule/https-api-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.066991    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.066998    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067005    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067013    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067022    5041 executor.go:139] error running task "FirewallRule/master-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067029    5041 executor.go:139] error running task "FirewallRule/node-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067036    5041 executor.go:139] error running task "InstanceGroupManager/a-master-us-west3-a-e2e-5690f240d7-0a91d-k8s-local" (9m51s remaining to succeed): error creating InstanceGroupManager: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready
W1003 23:40:04.067042    5041 executor.go:139] error running task "FirewallRule/pod-cidrs-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067048    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067055    5041 executor.go:139] error running task "FirewallRule/node-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:04.067061    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m46s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
I1003 23:40:04.067076    5041 executor.go:111] Tasks: 50 done / 65 total; 15 can run
E1003 23:40:08.568707    5041 op.go:136] GCE operation failed: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready
W1003 23:40:08.568841    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568857    5041 executor.go:139] error running task "FirewallRule/node-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568873    5041 executor.go:139] error running task "InstanceGroupManager/a-nodes-us-west3-a-e2e-5690f240d7-0a91d-k8s-local" (9m55s remaining to succeed): error creating InstanceGroupManager: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready
W1003 23:40:08.568891    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568898    5041 executor.go:139] error running task "FirewallRule/https-api-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568912    5041 executor.go:139] error running task "FirewallRule/master-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568930    5041 executor.go:139] error running task "FirewallRule/https-api-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568944    5041 executor.go:139] error running task "FirewallRule/nodeport-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568951    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-ipv6-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568966    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.568982    5041 executor.go:139] error running task "FirewallRule/ssh-external-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.569002    5041 executor.go:139] error running task "FirewallRule/master-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.569022    5041 executor.go:139] error running task "FirewallRule/node-to-master-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
W1003 23:40:08.569037    5041 executor.go:139] error running task "InstanceGroupManager/a-master-us-west3-a-e2e-5690f240d7-0a91d-k8s-local" (9m47s remaining to succeed): error updating InstanceTemplate for InstanceGroupManager: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instanceGroupManagers/a-master-us-west3-a-e2e-5690f240d7-0a91d-k8s-local' is not ready, resourceNotReady
W1003 23:40:08.569056    5041 executor.go:139] error running task "FirewallRule/pod-cidrs-to-node-e2e-5690f240d7-0a91d-k8s-local" (9m42s remaining to succeed): error creating FirewallRule: googleapi: Error 400: The resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/global/networks/e2e-5690f240d7-0a91d' is not ready, resourceNotReady
I1003 23:40:08.569064    5041 executor.go:155] No progress made, sleeping before retrying 15 task(s)
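The `executor.go` churn above is expected on GCE: firewall-rule and instance-group tasks fail with `resourceNotReady` until the new VPC network finishes provisioning, and each task is retried against a per-task deadline (the "9m58s remaining to succeed" countdown) with a sleep whenever a full pass makes no progress. A simplified sketch of that retry loop under those assumptions (this is not kops' actual executor, which lives in `upup/pkg/fi`):

```python
import time

def run_with_retries(tasks, deadline_s=600, sleep_s=10,
                     now=time.monotonic, sleep=time.sleep):
    """Retry failing tasks until all succeed or the deadline lapses,
    sleeping when a full pass makes no progress."""
    start = now()
    pending = list(tasks)
    while pending:
        progressed = False
        still_failing = []
        for task in pending:
            try:
                task()            # e.g. create a FirewallRule
                progressed = True
            except Exception:
                still_failing.append(task)  # e.g. resourceNotReady, retry later
        if still_failing and now() - start > deadline_s:
            raise TimeoutError(f"{len(still_failing)} task(s) never became ready")
        if not progressed:
            sleep(sleep_s)        # "No progress made, sleeping before retrying"
        pending = still_failing
```

In the log, all 15 remaining tasks succeed on the pass after the sleep, once the network becomes ready.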
I1003 23:40:18.569364    5041 executor.go:111] Tasks: 50 done / 65 total; 15 can run
I1003 23:40:43.900614    5041 executor.go:111] Tasks: 65 done / 65 total; 0 can run
I1003 23:40:43.958391    5041 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-5690f240d7-0a91d.k8s.local

... skipping 8 lines ...

I1003 23:40:44.477059    4706 up.go:181] /logs/artifacts/f1d77951-24a2-11ec-b766-fab168edef1d/kops validate cluster --name e2e-5690f240d7-0a91d.k8s.local --count 10 --wait 15m0s
I1003 23:40:44.497497    5062 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1003 23:40:44.497613    5062 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-5690f240d7-0a91d.k8s.local

W1003 23:41:14.780784    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: i/o timeout
W1003 23:41:24.805413    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:41:34.827989    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:41:44.851046    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:41:54.875256    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:42:04.899699    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:42:14.923278    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:42:24.947785    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:42:34.970533    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:42:44.996695    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:42:55.019591    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:43:05.044075    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": dial tcp 34.106.178.72:443: connect: connection refused
W1003 23:43:25.069063    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": net/http: TLS handshake timeout
W1003 23:43:45.093533    5062 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.178.72/api/v1/nodes": net/http: TLS handshake timeout
I1003 23:43:58.172103    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3

... skipping 5 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/master-us-west3-a-nkrg	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/master-us-west3-a-nkrg" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0" has not yet joined cluster

Validation Failed
W1003 23:43:59.011272    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
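`kops validate cluster --wait 15m0s` drives the loop seen here: each pass lists nodes and system pods, reports anything unhealthy ("has not yet joined cluster", "is not ready"), and retries until the checks pass or the wait budget lapses. A minimal polling sketch under those assumptions (illustrative names, not kops' validation code):

```python
import time

def wait_for_healthy(check, timeout_s=900, interval_s=10,
                     now=time.monotonic, sleep=time.sleep):
    """Poll a health check until it returns no failures or the
    wait budget (e.g. --wait 15m0s) is exhausted."""
    deadline = now() + timeout_s
    while True:
        failures = check()        # list of unhealthy machines/nodes/pods
        if not failures:
            return                # cluster validated
        if now() >= deadline:
            raise TimeoutError(f"cluster not yet healthy: {failures}")
        sleep(interval_s)         # "(will retry): cluster not yet healthy"
```

With fake clocks this verifies the loop retries on failures and returns once the check comes back clean.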
I1003 23:44:09.321708    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 7 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0" has not yet joined cluster
Node	master-us-west3-a-nkrg														node "master-us-west3-a-nkrg" of role "master" is not ready

Validation Failed
W1003 23:44:10.114813    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:44:20.489257    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 7 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0" has not yet joined cluster
Node	master-us-west3-a-nkrg														node "master-us-west3-a-nkrg" of role "master" is not ready

Validation Failed
W1003 23:44:21.238419    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:44:31.617703    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 8 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0" has not yet joined cluster
Node	master-us-west3-a-nkrg														node "master-us-west3-a-nkrg" of role "master" is not ready
Pod	kube-system/kube-proxy-master-us-west3-a-nkrg											system-node-critical pod "kube-proxy-master-us-west3-a-nkrg" is pending

Validation Failed
W1003 23:44:32.354641    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:44:42.754203    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 10 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0" has not yet joined cluster
Node	master-us-west3-a-nkrg														master "master-us-west3-a-nkrg" is missing kube-apiserver pod
Pod	kube-system/coredns-5dc785954d-s8wb9												system-cluster-critical pod "coredns-5dc785954d-s8wb9" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-6n4dm											system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-6n4dm" is pending
Pod	kube-system/metadata-proxy-v0.12-fx8zj												system-node-critical pod "metadata-proxy-v0.12-fx8zj" is pending

Validation Failed
W1003 23:44:43.495626    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:44:53.830439    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 9 lines ...
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h" has not yet joined cluster
Machine	https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0	machine "https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-s8wb9												system-cluster-critical pod "coredns-5dc785954d-s8wb9" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-6n4dm											system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-6n4dm" is pending
Pod	kube-system/etcd-manager-main-master-us-west3-a-nkrg										system-cluster-critical pod "etcd-manager-main-master-us-west3-a-nkrg" is pending

Validation Failed
W1003 23:44:54.509051    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:45:04.805687    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 16 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-6n4dm	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-6n4dm" is pending
Pod	kube-system/metadata-proxy-v0.12-7p4qr		system-node-critical pod "metadata-proxy-v0.12-7p4qr" is pending
Pod	kube-system/metadata-proxy-v0.12-8k8rt		system-node-critical pod "metadata-proxy-v0.12-8k8rt" is pending
Pod	kube-system/metadata-proxy-v0.12-jhr44		system-node-critical pod "metadata-proxy-v0.12-jhr44" is pending
Pod	kube-system/metadata-proxy-v0.12-m2lls		system-node-critical pod "metadata-proxy-v0.12-m2lls" is pending

Validation Failed
W1003 23:45:05.527928    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:45:15.846265    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 16 lines ...
Pod	kube-system/kube-proxy-nodes-us-west3-a-zfb0	system-node-critical pod "kube-proxy-nodes-us-west3-a-zfb0" is pending
Pod	kube-system/metadata-proxy-v0.12-7p4qr		system-node-critical pod "metadata-proxy-v0.12-7p4qr" is pending
Pod	kube-system/metadata-proxy-v0.12-8k8rt		system-node-critical pod "metadata-proxy-v0.12-8k8rt" is pending
Pod	kube-system/metadata-proxy-v0.12-jhr44		system-node-critical pod "metadata-proxy-v0.12-jhr44" is pending
Pod	kube-system/metadata-proxy-v0.12-m2lls		system-node-critical pod "metadata-proxy-v0.12-m2lls" is pending

Validation Failed
W1003 23:45:16.614023    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:45:26.994873    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 12 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-6n4dm	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-6n4dm" is pending
Pod	kube-system/metadata-proxy-v0.12-7p4qr		system-node-critical pod "metadata-proxy-v0.12-7p4qr" is pending
Pod	kube-system/metadata-proxy-v0.12-8k8rt		system-node-critical pod "metadata-proxy-v0.12-8k8rt" is pending
Pod	kube-system/metadata-proxy-v0.12-jhr44		system-node-critical pod "metadata-proxy-v0.12-jhr44" is pending
Pod	kube-system/metadata-proxy-v0.12-m2lls		system-node-critical pod "metadata-proxy-v0.12-m2lls" is pending

Validation Failed
W1003 23:45:27.729319    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:45:38.052318    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 9 lines ...
VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-s8wb9	system-cluster-critical pod "coredns-5dc785954d-s8wb9" is not ready (coredns)
Pod	kube-system/metadata-proxy-v0.12-8k8rt	system-node-critical pod "metadata-proxy-v0.12-8k8rt" is pending
Pod	kube-system/metadata-proxy-v0.12-m2lls	system-node-critical pod "metadata-proxy-v0.12-m2lls" is pending

Validation Failed
W1003 23:45:38.785217    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:45:49.201069    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 8 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-s8wb9	system-cluster-critical pod "coredns-5dc785954d-s8wb9" is not ready (coredns)
Pod	kube-system/metadata-proxy-v0.12-m2lls	system-node-critical pod "metadata-proxy-v0.12-m2lls" is pending

Validation Failed
W1003 23:45:49.938441    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:46:00.341567    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 7 lines ...
nodes-us-west3-a-zfb0	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-s8wb9	system-cluster-critical pod "coredns-5dc785954d-s8wb9" is not ready (coredns)

Validation Failed
W1003 23:46:01.055070    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:46:11.484082    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 7 lines ...
nodes-us-west3-a-zfb0	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-west3-a-chbv	system-node-critical pod "kube-proxy-nodes-us-west3-a-chbv" is pending

Validation Failed
W1003 23:46:12.193174    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:46:22.521817    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 39 lines ...
nodes-us-west3-a-zfb0	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kube-proxy-nodes-us-west3-a-h26h	system-node-critical pod "kube-proxy-nodes-us-west3-a-h26h" is pending

Validation Failed
W1003 23:46:45.480016    5062 validate_cluster.go:232] (will retry): cluster not yet healthy
I1003 23:46:55.885722    5062 gce_cloud.go:279] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west3-a	Master	e2-standard-2	1	1	us-west3
nodes-us-west3-a	Node	n1-standard-2	4	4	us-west3
... skipping 180 lines ...
===================================
Random Seed: 1633304932 - Will randomize all specs
Will run 6432 specs

Running in parallel across 25 nodes

Oct  3 23:49:07.719: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Oct  3 23:49:07.719: INFO:  > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Oct  3 23:49:07.719: INFO:  > 
Oct  3 23:49:07.719: INFO: Cluster image sources lookup failed: exit status 1

Oct  3 23:49:07.719: INFO: >>> kubeConfig: /root/.kube/config
Oct  3 23:49:07.721: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct  3 23:49:07.824: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct  3 23:49:07.933: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct  3 23:49:07.933: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 144 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 539 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:08.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3550" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:08.791: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:08.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8321" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:09.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9095" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:10.314: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:10.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-334" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:10.725: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:10.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-7405" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":1,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:10.915: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
• [SLOW TEST:6.807 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:18.299 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:27.366: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:49:09.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b" in namespace "projected-7462" to be "Succeeded or Failed"
Oct  3 23:49:10.290: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 394.165282ms
Oct  3 23:49:12.315: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418549171s
Oct  3 23:49:14.342: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445798326s
Oct  3 23:49:16.367: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470835194s
Oct  3 23:49:18.392: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.496387074s
Oct  3 23:49:20.418: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.522451633s
Oct  3 23:49:22.444: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.547788545s
Oct  3 23:49:24.468: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.571775141s
Oct  3 23:49:26.493: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.597198157s
Oct  3 23:49:28.518: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.622450383s
STEP: Saw pod success
Oct  3 23:49:28.519: INFO: Pod "downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b" satisfied condition "Succeeded or Failed"
Oct  3 23:49:28.543: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b container client-container: <nil>
STEP: delete the pod
Oct  3 23:49:28.620: INFO: Waiting for pod downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b to disappear
Oct  3 23:49:28.643: INFO: Pod downwardapi-volume-7c21a24c-d41b-4202-b792-1a993f243e2b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:49:15.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Oct  3 23:49:15.221: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa" in namespace "security-context-test-6453" to be "Succeeded or Failed"
Oct  3 23:49:15.246: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 24.991002ms
Oct  3 23:49:17.271: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049255631s
Oct  3 23:49:19.296: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074930137s
Oct  3 23:49:21.321: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099667227s
Oct  3 23:49:23.347: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125622967s
Oct  3 23:49:25.372: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150308312s
Oct  3 23:49:27.395: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.173766806s
Oct  3 23:49:29.423: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.202025909s
Oct  3 23:49:29.424: INFO: Pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa" satisfied condition "Succeeded or Failed"
Oct  3 23:49:29.452: INFO: Got logs for pod "busybox-privileged-true-8c75b0ab-7cf1-44a3-afa6-2f0db43252fa": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:29.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6453" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":2,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:21.741 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:49:10.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-38516c6a-281f-4f74-a063-bf195ae3a7f8
STEP: Creating a pod to test consume configMaps
Oct  3 23:49:11.089: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2" in namespace "projected-6218" to be "Succeeded or Failed"
Oct  3 23:49:11.113: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.004608ms
Oct  3 23:49:13.139: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049807995s
Oct  3 23:49:15.166: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077011711s
Oct  3 23:49:17.193: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104005662s
Oct  3 23:49:19.221: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131476345s
Oct  3 23:49:21.248: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.158932363s
Oct  3 23:49:23.274: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.184269671s
Oct  3 23:49:25.299: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209983605s
Oct  3 23:49:27.327: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.238203121s
Oct  3 23:49:29.353: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.263971492s
STEP: Saw pod success
Oct  3 23:49:29.353: INFO: Pod "pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2" satisfied condition "Succeeded or Failed"
Oct  3 23:49:29.378: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2 container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:49:29.795: INFO: Waiting for pod pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2 to disappear
Oct  3 23:49:29.836: INFO: Pod pod-projected-configmaps-fa623588-dd75-4ce9-b540-f4e81349e4a2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 50 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:49:28.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 61 lines ...
STEP: Destroying namespace "services-9400" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:30.185: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 82 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:31.051: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 162 lines ...
• [SLOW TEST:23.143 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:26.482 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":2,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:37.297: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:38.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6134" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":3,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Oct  3 23:49:10.588: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  3 23:49:10.819: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970" in namespace "security-context-test-1206" to be "Succeeded or Failed"
Oct  3 23:49:10.870: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 51.658299ms
Oct  3 23:49:12.906: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086861669s
Oct  3 23:49:14.949: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129876895s
Oct  3 23:49:16.984: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165349964s
Oct  3 23:49:19.008: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189189207s
Oct  3 23:49:21.037: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217974626s
... skipping 4 lines ...
Oct  3 23:49:31.163: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 20.344357367s
Oct  3 23:49:33.188: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 22.36895181s
Oct  3 23:49:35.218: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 24.39870077s
Oct  3 23:49:37.242: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 26.422702272s
Oct  3 23:49:39.266: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Pending", Reason="", readiness=false. Elapsed: 28.447390588s
Oct  3 23:49:41.291: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.471899459s
Oct  3 23:49:41.291: INFO: Pod "alpine-nnp-false-e44af5f3-9304-45e5-99a4-5eb924f46970" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:41.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1206" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:41.437: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4579" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:41.813: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:49:30.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f" in namespace "projected-1943" to be "Succeeded or Failed"
Oct  3 23:49:30.479: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.114719ms
Oct  3 23:49:32.505: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050058945s
Oct  3 23:49:34.534: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078691473s
Oct  3 23:49:36.559: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103919649s
Oct  3 23:49:38.584: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128469354s
Oct  3 23:49:40.609: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153628055s
Oct  3 23:49:42.634: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.178579145s
Oct  3 23:49:44.659: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.203460125s
STEP: Saw pod success
Oct  3 23:49:44.659: INFO: Pod "downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f" satisfied condition "Succeeded or Failed"
Oct  3 23:49:44.683: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f container client-container: <nil>
STEP: delete the pod
Oct  3 23:49:44.745: INFO: Waiting for pod downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f to disappear
Oct  3 23:49:44.768: INFO: Pod downwardapi-volume-4fac6752-0884-444f-bfe9-af6c95d3159f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.616 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:45.842: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:45.910: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
Oct  3 23:49:31.464: INFO: PersistentVolumeClaim pvc-glc6v found but phase is Pending instead of Bound.
Oct  3 23:49:33.492: INFO: PersistentVolumeClaim pvc-glc6v found and phase=Bound (10.154566772s)
Oct  3 23:49:33.492: INFO: Waiting up to 3m0s for PersistentVolume local-bdq2r to have phase Bound
Oct  3 23:49:33.515: INFO: PersistentVolume local-bdq2r found and phase=Bound (23.564402ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-s6mq
STEP: Creating a pod to test subpath
Oct  3 23:49:33.601: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s6mq" in namespace "provisioning-8273" to be "Succeeded or Failed"
Oct  3 23:49:33.627: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Pending", Reason="", readiness=false. Elapsed: 25.922339ms
Oct  3 23:49:35.653: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052085584s
Oct  3 23:49:37.680: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07850196s
Oct  3 23:49:39.706: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104474579s
Oct  3 23:49:41.731: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12990716s
Oct  3 23:49:43.757: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155517718s
Oct  3 23:49:45.782: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.181315347s
STEP: Saw pod success
Oct  3 23:49:45.782: INFO: Pod "pod-subpath-test-preprovisionedpv-s6mq" satisfied condition "Succeeded or Failed"
Oct  3 23:49:45.807: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-preprovisionedpv-s6mq container test-container-subpath-preprovisionedpv-s6mq: <nil>
STEP: delete the pod
Oct  3 23:49:45.888: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s6mq to disappear
Oct  3 23:49:45.916: INFO: Pod pod-subpath-test-preprovisionedpv-s6mq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s6mq
Oct  3 23:49:45.916: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s6mq" in namespace "provisioning-8273"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:48.573: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 24 lines ...
Oct  3 23:49:31.475: INFO: PersistentVolumeClaim pvc-k598f found but phase is Pending instead of Bound.
Oct  3 23:49:33.498: INFO: PersistentVolumeClaim pvc-k598f found and phase=Bound (4.07266592s)
Oct  3 23:49:33.498: INFO: Waiting up to 3m0s for PersistentVolume local-k9zr5 to have phase Bound
Oct  3 23:49:33.521: INFO: PersistentVolume local-k9zr5 found and phase=Bound (23.070311ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-74lv
STEP: Creating a pod to test subpath
Oct  3 23:49:33.599: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-74lv" in namespace "provisioning-2583" to be "Succeeded or Failed"
Oct  3 23:49:33.625: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 25.982147ms
Oct  3 23:49:35.649: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050516803s
Oct  3 23:49:37.676: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076928067s
Oct  3 23:49:39.701: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102553964s
Oct  3 23:49:41.728: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128580761s
Oct  3 23:49:43.753: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.154014637s
Oct  3 23:49:45.778: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.179049324s
Oct  3 23:49:47.803: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.204514066s
Oct  3 23:49:49.830: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.230916555s
STEP: Saw pod success
Oct  3 23:49:49.830: INFO: Pod "pod-subpath-test-preprovisionedpv-74lv" satisfied condition "Succeeded or Failed"
Oct  3 23:49:49.854: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-preprovisionedpv-74lv container test-container-volume-preprovisionedpv-74lv: <nil>
STEP: delete the pod
Oct  3 23:49:49.923: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-74lv to disappear
Oct  3 23:49:49.947: INFO: Pod pod-subpath-test-preprovisionedpv-74lv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-74lv
Oct  3 23:49:49.947: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-74lv" in namespace "provisioning-2583"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:50.685: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 208 lines ...
• [SLOW TEST:43.059 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Provider:GCE]","total":-1,"completed":1,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:51.377: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct  3 23:49:38.498: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7881" to be "Succeeded or Failed"
Oct  3 23:49:38.521: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 23.132935ms
Oct  3 23:49:40.546: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0475625s
Oct  3 23:49:42.571: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073247036s
Oct  3 23:49:44.596: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098197582s
Oct  3 23:49:46.621: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123207406s
Oct  3 23:49:48.646: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.147674843s
Oct  3 23:49:50.672: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.173659012s
Oct  3 23:49:52.697: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.199274155s
Oct  3 23:49:54.723: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.224626343s
STEP: Saw pod success
Oct  3 23:49:54.723: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  3 23:49:54.747: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  3 23:49:54.822: INFO: Waiting for pod pod-host-path-test to disappear
Oct  3 23:49:54.846: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.552 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":4,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:54.931: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 38 lines ...
Oct  3 23:49:43.018: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:55.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8302" for this suite.
STEP: Destroying namespace "webhook-8302-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:26.885 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:55.843: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 92 lines ...
Oct  3 23:49:32.068: INFO: PersistentVolumeClaim pvc-gqnlg found but phase is Pending instead of Bound.
Oct  3 23:49:34.094: INFO: PersistentVolumeClaim pvc-gqnlg found and phase=Bound (8.134591453s)
Oct  3 23:49:34.094: INFO: Waiting up to 3m0s for PersistentVolume local-n7q4f to have phase Bound
Oct  3 23:49:34.121: INFO: PersistentVolume local-n7q4f found and phase=Bound (26.548353ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-4vxx
STEP: Creating a pod to test exec-volume-test
Oct  3 23:49:34.196: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-4vxx" in namespace "volume-6123" to be "Succeeded or Failed"
Oct  3 23:49:34.220: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 24.093543ms
Oct  3 23:49:36.246: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050303584s
Oct  3 23:49:38.270: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07440462s
Oct  3 23:49:40.295: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099083585s
Oct  3 23:49:42.319: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122968533s
Oct  3 23:49:44.343: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14716543s
Oct  3 23:49:46.368: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.172310414s
Oct  3 23:49:48.396: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.2007165s
Oct  3 23:49:50.433: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.236844072s
Oct  3 23:49:52.457: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 18.261115235s
Oct  3 23:49:54.485: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Pending", Reason="", readiness=false. Elapsed: 20.288829336s
Oct  3 23:49:56.513: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.316812732s
STEP: Saw pod success
Oct  3 23:49:56.513: INFO: Pod "exec-volume-test-preprovisionedpv-4vxx" satisfied condition "Succeeded or Failed"
Oct  3 23:49:56.544: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod exec-volume-test-preprovisionedpv-4vxx container exec-container-preprovisionedpv-4vxx: <nil>
STEP: delete the pod
Oct  3 23:49:56.609: INFO: Waiting for pod exec-volume-test-preprovisionedpv-4vxx to disappear
Oct  3 23:49:56.634: INFO: Pod exec-volume-test-preprovisionedpv-4vxx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-4vxx
Oct  3 23:49:56.634: INFO: Deleting pod "exec-volume-test-preprovisionedpv-4vxx" in namespace "volume-6123"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:57.537: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 88 lines ...
Oct  3 23:49:31.584: INFO: PersistentVolumeClaim pvc-q7cnr found but phase is Pending instead of Bound.
Oct  3 23:49:33.617: INFO: PersistentVolumeClaim pvc-q7cnr found and phase=Bound (10.152023763s)
Oct  3 23:49:33.617: INFO: Waiting up to 3m0s for PersistentVolume local-wwrqp to have phase Bound
Oct  3 23:49:33.639: INFO: PersistentVolume local-wwrqp found and phase=Bound (22.670858ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fp2x
STEP: Creating a pod to test subpath
Oct  3 23:49:33.744: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fp2x" in namespace "provisioning-5709" to be "Succeeded or Failed"
Oct  3 23:49:33.775: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 30.977693ms
Oct  3 23:49:35.800: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055327713s
Oct  3 23:49:37.826: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081405808s
Oct  3 23:49:39.851: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106624486s
Oct  3 23:49:41.879: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134832719s
Oct  3 23:49:43.904: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15932072s
Oct  3 23:49:45.927: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182910494s
Oct  3 23:49:47.954: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209615848s
Oct  3 23:49:49.983: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 16.238546197s
Oct  3 23:49:52.010: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.266015476s
STEP: Saw pod success
Oct  3 23:49:52.010: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x" satisfied condition "Succeeded or Failed"
Oct  3 23:49:52.039: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-subpath-test-preprovisionedpv-fp2x container test-container-subpath-preprovisionedpv-fp2x: <nil>
STEP: delete the pod
Oct  3 23:49:52.210: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fp2x to disappear
Oct  3 23:49:52.251: INFO: Pod pod-subpath-test-preprovisionedpv-fp2x no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fp2x
Oct  3 23:49:52.251: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fp2x" in namespace "provisioning-5709"
STEP: Creating pod pod-subpath-test-preprovisionedpv-fp2x
STEP: Creating a pod to test subpath
Oct  3 23:49:52.311: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fp2x" in namespace "provisioning-5709" to be "Succeeded or Failed"
Oct  3 23:49:52.335: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 23.382727ms
Oct  3 23:49:54.361: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050193321s
Oct  3 23:49:56.387: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07556306s
Oct  3 23:49:58.412: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100945479s
STEP: Saw pod success
Oct  3 23:49:58.412: INFO: Pod "pod-subpath-test-preprovisionedpv-fp2x" satisfied condition "Succeeded or Failed"
Oct  3 23:49:58.435: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-subpath-test-preprovisionedpv-fp2x container test-container-subpath-preprovisionedpv-fp2x: <nil>
STEP: delete the pod
Oct  3 23:49:58.498: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fp2x to disappear
Oct  3 23:49:58.521: INFO: Pod pod-subpath-test-preprovisionedpv-fp2x no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fp2x
Oct  3 23:49:58.521: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fp2x" in namespace "provisioning-5709"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:58.955: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:49:59.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":2,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:49:59.077: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 194 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:00.022: INFO: Only supported for providers [openstack] (not gce)
... skipping 35 lines ...
• [SLOW TEST:34.269 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create pods for an Indexed job with completion indexes and specified hostname
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:150
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:30.502 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:01.747: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 62 lines ...
Oct  3 23:49:46.854: INFO: PersistentVolumeClaim pvc-f2whc found but phase is Pending instead of Bound.
Oct  3 23:49:48.879: INFO: PersistentVolumeClaim pvc-f2whc found and phase=Bound (4.07175198s)
Oct  3 23:49:48.879: INFO: Waiting up to 3m0s for PersistentVolume local-94dvg to have phase Bound
Oct  3 23:49:48.904: INFO: PersistentVolume local-94dvg found and phase=Bound (25.16329ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-czhg
STEP: Creating a pod to test subpath
Oct  3 23:49:48.980: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-czhg" in namespace "provisioning-253" to be "Succeeded or Failed"
Oct  3 23:49:49.008: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Pending", Reason="", readiness=false. Elapsed: 28.084083ms
Oct  3 23:49:51.036: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056431419s
Oct  3 23:49:53.063: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082940922s
Oct  3 23:49:55.092: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112001228s
Oct  3 23:49:57.117: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137303107s
Oct  3 23:49:59.151: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17112879s
Oct  3 23:50:01.176: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.19659442s
STEP: Saw pod success
Oct  3 23:50:01.177: INFO: Pod "pod-subpath-test-preprovisionedpv-czhg" satisfied condition "Succeeded or Failed"
Oct  3 23:50:01.201: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-preprovisionedpv-czhg container test-container-volume-preprovisionedpv-czhg: <nil>
STEP: delete the pod
Oct  3 23:50:01.276: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-czhg to disappear
Oct  3 23:50:01.300: INFO: Pod pod-subpath-test-preprovisionedpv-czhg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-czhg
Oct  3 23:50:01.300: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-czhg" in namespace "provisioning-253"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Oct  3 23:49:45.907: INFO: PersistentVolumeClaim pvc-dwjqc found but phase is Pending instead of Bound.
Oct  3 23:49:47.933: INFO: PersistentVolumeClaim pvc-dwjqc found and phase=Bound (6.104968242s)
Oct  3 23:49:47.933: INFO: Waiting up to 3m0s for PersistentVolume local-lbsmm to have phase Bound
Oct  3 23:49:47.957: INFO: PersistentVolume local-lbsmm found and phase=Bound (23.369147ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2wrv
STEP: Creating a pod to test subpath
Oct  3 23:49:48.038: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2wrv" in namespace "provisioning-3551" to be "Succeeded or Failed"
Oct  3 23:49:48.063: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 25.337061ms
Oct  3 23:49:50.097: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058852672s
Oct  3 23:49:52.123: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085152081s
Oct  3 23:49:54.147: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109357624s
Oct  3 23:49:56.176: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137572411s
Oct  3 23:49:58.201: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162715082s
Oct  3 23:50:00.227: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.188787094s
Oct  3 23:50:02.253: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.215231457s
STEP: Saw pod success
Oct  3 23:50:02.253: INFO: Pod "pod-subpath-test-preprovisionedpv-2wrv" satisfied condition "Succeeded or Failed"
Oct  3 23:50:02.279: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-2wrv container test-container-volume-preprovisionedpv-2wrv: <nil>
STEP: delete the pod
Oct  3 23:50:02.341: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2wrv to disappear
Oct  3 23:50:02.368: INFO: Pod pod-subpath-test-preprovisionedpv-2wrv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2wrv
Oct  3 23:50:02.368: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2wrv" in namespace "provisioning-3551"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:49:29.964: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Oct  3 23:49:45.958: INFO: PersistentVolumeClaim pvc-n8q94 found but phase is Pending instead of Bound.
Oct  3 23:49:47.989: INFO: PersistentVolumeClaim pvc-n8q94 found and phase=Bound (8.132358378s)
Oct  3 23:49:47.989: INFO: Waiting up to 3m0s for PersistentVolume local-9gns7 to have phase Bound
Oct  3 23:49:48.014: INFO: PersistentVolume local-9gns7 found and phase=Bound (24.664175ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pxc9
STEP: Creating a pod to test subpath
Oct  3 23:49:48.096: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pxc9" in namespace "provisioning-7814" to be "Succeeded or Failed"
Oct  3 23:49:48.126: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.513316ms
Oct  3 23:49:50.151: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055021639s
Oct  3 23:49:52.183: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087058545s
Oct  3 23:49:54.212: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115566021s
Oct  3 23:49:56.241: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14428887s
Oct  3 23:49:58.267: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170102709s
Oct  3 23:50:00.296: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.199356981s
Oct  3 23:50:02.322: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.226086992s
STEP: Saw pod success
Oct  3 23:50:02.323: INFO: Pod "pod-subpath-test-preprovisionedpv-pxc9" satisfied condition "Succeeded or Failed"
Oct  3 23:50:02.350: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-pxc9 container test-container-volume-preprovisionedpv-pxc9: <nil>
STEP: delete the pod
Oct  3 23:50:02.424: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pxc9 to disappear
Oct  3 23:50:02.453: INFO: Pod pod-subpath-test-preprovisionedpv-pxc9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pxc9
Oct  3 23:50:02.453: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pxc9" in namespace "provisioning-7814"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 23 lines ...
Oct  3 23:49:31.226: INFO: PersistentVolumeClaim pvc-rfn7k found but phase is Pending instead of Bound.
Oct  3 23:49:33.250: INFO: PersistentVolumeClaim pvc-rfn7k found and phase=Bound (4.071039438s)
Oct  3 23:49:33.250: INFO: Waiting up to 3m0s for PersistentVolume local-sh4hk to have phase Bound
Oct  3 23:49:33.273: INFO: PersistentVolume local-sh4hk found and phase=Bound (22.61167ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-wts5
STEP: Creating a pod to test exec-volume-test
Oct  3 23:49:33.343: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-wts5" in namespace "volume-4763" to be "Succeeded or Failed"
Oct  3 23:49:33.365: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.649434ms
Oct  3 23:49:35.391: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047884952s
Oct  3 23:49:37.415: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072219643s
Oct  3 23:49:39.439: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096646367s
Oct  3 23:49:41.473: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130435335s
Oct  3 23:49:43.500: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15713829s
... skipping 5 lines ...
Oct  3 23:49:55.645: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.302260007s
Oct  3 23:49:57.676: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.332878156s
Oct  3 23:49:59.699: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.356255244s
Oct  3 23:50:01.729: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.385895453s
Oct  3 23:50:03.759: INFO: Pod "exec-volume-test-preprovisionedpv-wts5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.416136999s
STEP: Saw pod success
Oct  3 23:50:03.759: INFO: Pod "exec-volume-test-preprovisionedpv-wts5" satisfied condition "Succeeded or Failed"
Oct  3 23:50:03.808: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod exec-volume-test-preprovisionedpv-wts5 container exec-container-preprovisionedpv-wts5: <nil>
STEP: delete the pod
Oct  3 23:50:04.021: INFO: Waiting for pod exec-volume-test-preprovisionedpv-wts5 to disappear
Oct  3 23:50:04.065: INFO: Pod exec-volume-test-preprovisionedpv-wts5 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-wts5
Oct  3 23:50:04.065: INFO: Deleting pod "exec-volume-test-preprovisionedpv-wts5" in namespace "volume-4763"
... skipping 35 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:49:59.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61" in namespace "projected-7028" to be "Succeeded or Failed"
Oct  3 23:49:59.322: INFO: Pod "downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61": Phase="Pending", Reason="", readiness=false. Elapsed: 22.811882ms
Oct  3 23:50:01.347: INFO: Pod "downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047682091s
Oct  3 23:50:03.371: INFO: Pod "downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072106307s
Oct  3 23:50:05.462: INFO: Pod "downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.162923228s
STEP: Saw pod success
Oct  3 23:50:05.462: INFO: Pod "downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61" satisfied condition "Succeeded or Failed"
Oct  3 23:50:05.547: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61 container client-container: <nil>
STEP: delete the pod
Oct  3 23:50:05.702: INFO: Waiting for pod downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61 to disappear
Oct  3 23:50:05.747: INFO: Pod downwardapi-volume-bbe242f9-b700-4e38-b608-a8f3ae8e7f61 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.730 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:05.904: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 149 lines ...
• [SLOW TEST:17.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:34.779 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":2,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:06.176: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-af351e9e-86bb-4794-80a3-afd96461c2c2
STEP: Creating a pod to test consume configMaps
Oct  3 23:50:02.119: INFO: Waiting up to 5m0s for pod "pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433" in namespace "configmap-5424" to be "Succeeded or Failed"
Oct  3 23:50:02.143: INFO: Pod "pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433": Phase="Pending", Reason="", readiness=false. Elapsed: 23.428955ms
Oct  3 23:50:04.191: INFO: Pod "pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072058519s
Oct  3 23:50:06.235: INFO: Pod "pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115401007s
Oct  3 23:50:08.263: INFO: Pod "pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143852651s
STEP: Saw pod success
Oct  3 23:50:08.263: INFO: Pod "pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433" satisfied condition "Succeeded or Failed"
Oct  3 23:50:08.294: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433 container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:50:08.384: INFO: Waiting for pod pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433 to disappear
Oct  3 23:50:08.427: INFO: Pod pod-configmaps-c41c03c4-76ba-4d20-a82c-27a2f6b05433 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.627 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":19,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
Oct  3 23:50:06.328: INFO: Waiting for PV local-pvxkczq to bind to PVC pvc-bwcfl
Oct  3 23:50:06.328: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-bwcfl] to have phase Bound
Oct  3 23:50:06.365: INFO: PersistentVolumeClaim pvc-bwcfl found but phase is Pending instead of Bound.
Oct  3 23:50:08.394: INFO: PersistentVolumeClaim pvc-bwcfl found and phase=Bound (2.066150701s)
Oct  3 23:50:08.394: INFO: Waiting up to 3m0s for PersistentVolume local-pvxkczq to have phase Bound
Oct  3 23:50:08.427: INFO: PersistentVolume local-pvxkczq found and phase=Bound (33.175022ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Oct  3 23:50:08.539: INFO: Waiting up to 5m0s for pod "pod-bc823eb1-323b-43d8-adc4-4492e7f44e42" in namespace "persistent-local-volumes-test-3200" to be "Unschedulable"
Oct  3 23:50:08.569: INFO: Pod "pod-bc823eb1-323b-43d8-adc4-4492e7f44e42": Phase="Pending", Reason="", readiness=false. Elapsed: 30.027969ms
Oct  3 23:50:08.570: INFO: Pod "pod-bc823eb1-323b-43d8-adc4-4492e7f44e42" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:11.643 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":2,"skipped":15,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:18.114 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 143 lines ...
• [SLOW TEST:25.357 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":4,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:10.221: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 255 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:10.703: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 203 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:50:12.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7646" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:11.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:50:12.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6443" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:12.606: INFO: Only supported for providers [aws] (not gce)
... skipping 71 lines ...
• [SLOW TEST:12.914 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:14.704: INFO: Driver "local" does not provide raw block - skipping
... skipping 130 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 46 lines ...
• [SLOW TEST:11.640 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:480
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":4,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:15.501: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 43 lines ...
• [SLOW TEST:10.481 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":4,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:01.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-56848797-368d-4d10-af5d-f344aca18f67
STEP: Creating a pod to test consume configMaps
Oct  3 23:50:02.086: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b" in namespace "projected-5779" to be "Succeeded or Failed"
Oct  3 23:50:02.117: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.161199ms
Oct  3 23:50:04.146: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06026945s
Oct  3 23:50:06.186: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099852782s
Oct  3 23:50:08.216: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130346281s
Oct  3 23:50:10.246: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160329759s
Oct  3 23:50:12.289: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202625801s
Oct  3 23:50:14.327: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.240877833s
Oct  3 23:50:16.354: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.268062013s
STEP: Saw pod success
Oct  3 23:50:16.354: INFO: Pod "pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b" satisfied condition "Succeeded or Failed"
Oct  3 23:50:16.381: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:50:16.446: INFO: Waiting for pod pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b to disappear
Oct  3 23:50:16.475: INFO: Pod pod-projected-configmaps-0c02b049-bee1-4549-aa98-0584e2b2899b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.763 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:16.555: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 28 lines ...
Oct  3 23:50:09.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct  3 23:50:09.846: INFO: Waiting up to 5m0s for pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f" in namespace "emptydir-4604" to be "Succeeded or Failed"
Oct  3 23:50:09.882: INFO: Pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.425007ms
Oct  3 23:50:11.906: INFO: Pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05998397s
Oct  3 23:50:13.949: INFO: Pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103206488s
Oct  3 23:50:15.976: INFO: Pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130118539s
Oct  3 23:50:17.999: INFO: Pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.152954005s
STEP: Saw pod success
Oct  3 23:50:17.999: INFO: Pod "pod-e7033d00-69c3-4660-b412-c97b43dd5b6f" satisfied condition "Succeeded or Failed"
Oct  3 23:50:18.028: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-e7033d00-69c3-4660-b412-c97b43dd5b6f container test-container: <nil>
STEP: delete the pod
Oct  3 23:50:18.115: INFO: Waiting for pod pod-e7033d00-69c3-4660-b412-c97b43dd5b6f to disappear
Oct  3 23:50:18.137: INFO: Pod pod-e7033d00-69c3-4660-b412-c97b43dd5b6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.633 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:18.200: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:19.009: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:09.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-a4efa7bd-55a8-4a26-9020-d6b602ebb026
STEP: Creating a pod to test consume configMaps
Oct  3 23:50:09.960: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530" in namespace "projected-2786" to be "Succeeded or Failed"
Oct  3 23:50:10.007: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Pending", Reason="", readiness=false. Elapsed: 46.787042ms
Oct  3 23:50:12.043: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083201667s
Oct  3 23:50:14.073: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112706462s
Oct  3 23:50:16.096: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136329594s
Oct  3 23:50:18.125: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165307658s
Oct  3 23:50:20.148: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188469732s
Oct  3 23:50:22.172: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.211873154s
STEP: Saw pod success
Oct  3 23:50:22.172: INFO: Pod "pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530" satisfied condition "Succeeded or Failed"
Oct  3 23:50:22.195: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530 container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:50:22.256: INFO: Waiting for pod pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530 to disappear
Oct  3 23:50:22.283: INFO: Pod pod-projected-configmaps-249b23ba-2920-4e11-ba46-ac3d08435530 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.623 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:22.394: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 116 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 173 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:11.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 64 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239

      Only supported for providers [vsphere] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:05.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:19.348 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct  3 23:49:10.666: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7411glgrv
STEP: creating a claim
Oct  3 23:49:10.775: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zx7z
STEP: Creating a pod to test subpath
Oct  3 23:49:11.027: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zx7z" in namespace "provisioning-7411" to be "Succeeded or Failed"
Oct  3 23:49:11.086: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 59.196127ms
Oct  3 23:49:13.112: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085383872s
Oct  3 23:49:15.151: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124169574s
Oct  3 23:49:17.180: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153059834s
Oct  3 23:49:19.207: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180322214s
Oct  3 23:49:21.237: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.210684685s
... skipping 14 lines ...
Oct  3 23:49:51.645: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 40.618605469s
Oct  3 23:49:53.672: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 42.644916603s
Oct  3 23:49:55.724: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 44.696850349s
Oct  3 23:49:57.749: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Pending", Reason="", readiness=false. Elapsed: 46.722099578s
Oct  3 23:49:59.774: INFO: Pod "pod-subpath-test-dynamicpv-zx7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.747428468s
STEP: Saw pod success
Oct  3 23:49:59.774: INFO: Pod "pod-subpath-test-dynamicpv-zx7z" satisfied condition "Succeeded or Failed"
Oct  3 23:49:59.799: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-dynamicpv-zx7z container test-container-volume-dynamicpv-zx7z: <nil>
STEP: delete the pod
Oct  3 23:49:59.869: INFO: Waiting for pod pod-subpath-test-dynamicpv-zx7z to disappear
Oct  3 23:49:59.893: INFO: Pod pod-subpath-test-dynamicpv-zx7z no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zx7z
Oct  3 23:49:59.893: INFO: Deleting pod "pod-subpath-test-dynamicpv-zx7z" in namespace "provisioning-7411"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:25.359: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
• [SLOW TEST:11.440 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:27.088: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 44 lines ...
Oct  3 23:50:01.188: INFO: PersistentVolumeClaim pvc-z9dxw found but phase is Pending instead of Bound.
Oct  3 23:50:03.213: INFO: PersistentVolumeClaim pvc-z9dxw found and phase=Bound (8.134656469s)
Oct  3 23:50:03.213: INFO: Waiting up to 3m0s for PersistentVolume local-h7fxr to have phase Bound
Oct  3 23:50:03.237: INFO: PersistentVolume local-h7fxr found and phase=Bound (24.443327ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k4jv
STEP: Creating a pod to test subpath
Oct  3 23:50:03.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k4jv" in namespace "provisioning-2974" to be "Succeeded or Failed"
Oct  3 23:50:03.336: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 24.081123ms
Oct  3 23:50:05.370: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058087244s
Oct  3 23:50:07.415: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103224119s
Oct  3 23:50:09.535: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223203397s
Oct  3 23:50:11.566: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253982737s
Oct  3 23:50:13.618: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306756249s
Oct  3 23:50:15.644: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.332210669s
Oct  3 23:50:17.671: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.359241089s
Oct  3 23:50:19.707: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.39517383s
Oct  3 23:50:21.733: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.421884545s
Oct  3 23:50:23.760: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.448680877s
Oct  3 23:50:25.786: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.474282896s
STEP: Saw pod success
Oct  3 23:50:25.786: INFO: Pod "pod-subpath-test-preprovisionedpv-k4jv" satisfied condition "Succeeded or Failed"
Oct  3 23:50:25.811: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-k4jv container test-container-subpath-preprovisionedpv-k4jv: <nil>
STEP: delete the pod
Oct  3 23:50:25.872: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k4jv to disappear
Oct  3 23:50:25.896: INFO: Pod pod-subpath-test-preprovisionedpv-k4jv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k4jv
Oct  3 23:50:25.896: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k4jv" in namespace "provisioning-2974"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:27.124: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Oct  3 23:50:16.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  3 23:50:16.667: INFO: Waiting up to 5m0s for pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672" in namespace "security-context-6219" to be "Succeeded or Failed"
Oct  3 23:50:16.697: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672": Phase="Pending", Reason="", readiness=false. Elapsed: 29.786299ms
Oct  3 23:50:18.721: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053682915s
Oct  3 23:50:20.748: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080981341s
Oct  3 23:50:22.789: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1214374s
Oct  3 23:50:24.812: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145375321s
Oct  3 23:50:26.854: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186751609s
STEP: Saw pod success
Oct  3 23:50:26.854: INFO: Pod "security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672" satisfied condition "Succeeded or Failed"
Oct  3 23:50:26.905: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672 container test-container: <nil>
STEP: delete the pod
Oct  3 23:50:27.013: INFO: Waiting for pod security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672 to disappear
Oct  3 23:50:27.057: INFO: Pod security-context-0a74e81e-c916-4b19-91ad-bbccc63c5672 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 27 lines ...
      Driver local doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":5,"skipped":44,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:27.170: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 155 lines ...
• [SLOW TEST:15.020 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:31.617: INFO: >>> kubeConfig: /root/.kube/config
... skipping 85 lines ...
• [SLOW TEST:14.814 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:33.059: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Oct  3 23:50:25.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct  3 23:50:25.522: INFO: Waiting up to 5m0s for pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d" in namespace "containers-7933" to be "Succeeded or Failed"
Oct  3 23:50:25.546: INFO: Pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.846935ms
Oct  3 23:50:27.571: INFO: Pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048945445s
Oct  3 23:50:29.596: INFO: Pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074417638s
Oct  3 23:50:31.620: INFO: Pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098145291s
Oct  3 23:50:33.644: INFO: Pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122114965s
STEP: Saw pod success
Oct  3 23:50:33.644: INFO: Pod "client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d" satisfied condition "Succeeded or Failed"
Oct  3 23:50:33.668: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:50:33.741: INFO: Waiting for pod client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d to disappear
Oct  3 23:50:33.765: INFO: Pod client-containers-758b78fa-2e4d-4908-84e5-66fa4070804d no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.446 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:33.827: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:319
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:33.845: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:11.621 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:34.202: INFO: Only supported for providers [aws] (not gce)
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:35.222: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-575ed7eb-51e6-4784-acb1-fa44969640fe
STEP: Creating a pod to test consume configMaps
Oct  3 23:50:34.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5" in namespace "configmap-3251" to be "Succeeded or Failed"
Oct  3 23:50:34.050: INFO: Pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.665907ms
Oct  3 23:50:36.074: INFO: Pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052383864s
Oct  3 23:50:38.101: INFO: Pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079003893s
Oct  3 23:50:40.125: INFO: Pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102622663s
Oct  3 23:50:42.150: INFO: Pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127854806s
STEP: Saw pod success
Oct  3 23:50:42.150: INFO: Pod "pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5" satisfied condition "Succeeded or Failed"
Oct  3 23:50:42.176: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5 container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:50:42.249: INFO: Waiting for pod pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5 to disappear
Oct  3 23:50:42.280: INFO: Pod pod-configmaps-15733bde-d4db-42cf-860e-008e2a74a1b5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.508 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:42.373: INFO: Only supported for providers [aws] (not gce)
... skipping 66 lines ...
Oct  3 23:50:23.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Oct  3 23:50:23.943: INFO: Waiting up to 5m0s for pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd" in namespace "var-expansion-485" to be "Succeeded or Failed"
Oct  3 23:50:23.968: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.193953ms
Oct  3 23:50:25.992: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048468725s
Oct  3 23:50:28.019: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075633026s
Oct  3 23:50:30.044: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101066783s
Oct  3 23:50:32.072: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128944013s
Oct  3 23:50:34.101: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157173781s
Oct  3 23:50:36.125: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.181225454s
Oct  3 23:50:38.153: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209840545s
Oct  3 23:50:40.177: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.233838533s
Oct  3 23:50:42.209: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.265971669s
STEP: Saw pod success
Oct  3 23:50:42.209: INFO: Pod "var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd" satisfied condition "Succeeded or Failed"
Oct  3 23:50:42.235: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd container dapi-container: <nil>
STEP: delete the pod
Oct  3 23:50:42.333: INFO: Waiting for pod var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd to disappear
Oct  3 23:50:42.362: INFO: Pod var-expansion-16ef3c21-766a-4b0a-8b08-2a135bb586cd no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.684 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:42.491: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:12.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:44.612: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
Oct  3 23:49:58.142: INFO: Waiting for PV gce-hxjml to bind to PVC pvc-svknh
Oct  3 23:49:58.142: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-svknh] to have phase Bound
Oct  3 23:49:58.168: INFO: PersistentVolumeClaim pvc-svknh found and phase=Bound (25.338542ms)
Oct  3 23:49:58.168: INFO: Waiting up to 3m0s for PersistentVolume gce-hxjml to have phase Bound
Oct  3 23:49:58.191: INFO: PersistentVolume gce-hxjml found and phase=Bound (23.190229ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142
STEP: Deleting the Persistent Volume
Oct  3 23:50:22.351: INFO: Deleting PersistentVolume "gce-hxjml"
STEP: Deleting the client pod
Oct  3 23:50:22.498: INFO: Deleting pod "pvc-tester-m8p8z" in namespace "pv-1707"
Oct  3 23:50:22.558: INFO: Wait up to 5m0s for pod "pvc-tester-m8p8z" to be fully deleted
... skipping 14 lines ...
Oct  3 23:50:45.102: INFO: Successfully deleted PD "e2e-3ce2dfbe-a8f7-48ba-aef2-df625cbff2f6".


• [SLOW TEST:49.214 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":3,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:50:45.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-4225" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:50:45.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-9644" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:46.007: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 144 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:50:50.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3373" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":6,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:50.346: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Oct  3 23:50:15.187: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-774c2zft
STEP: creating a claim
Oct  3 23:50:15.213: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-qf6v
STEP: Creating a pod to test subpath
Oct  3 23:50:15.332: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-qf6v" in namespace "provisioning-774" to be "Succeeded or Failed"
Oct  3 23:50:15.379: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 46.564233ms
Oct  3 23:50:17.404: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071303037s
Oct  3 23:50:19.514: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182059382s
Oct  3 23:50:21.542: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209873115s
Oct  3 23:50:23.568: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235798934s
Oct  3 23:50:25.594: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261724486s
Oct  3 23:50:27.620: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.287425243s
Oct  3 23:50:29.645: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.312879633s
Oct  3 23:50:31.672: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.340058288s
Oct  3 23:50:33.704: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Pending", Reason="", readiness=false. Elapsed: 18.371796055s
Oct  3 23:50:35.734: INFO: Pod "pod-subpath-test-dynamicpv-qf6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.401193295s
STEP: Saw pod success
Oct  3 23:50:35.734: INFO: Pod "pod-subpath-test-dynamicpv-qf6v" satisfied condition "Succeeded or Failed"
Oct  3 23:50:35.759: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-subpath-test-dynamicpv-qf6v container test-container-subpath-dynamicpv-qf6v: <nil>
STEP: delete the pod
Oct  3 23:50:35.817: INFO: Waiting for pod pod-subpath-test-dynamicpv-qf6v to disappear
Oct  3 23:50:35.841: INFO: Pod pod-subpath-test-dynamicpv-qf6v no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-qf6v
Oct  3 23:50:35.841: INFO: Deleting pod "pod-subpath-test-dynamicpv-qf6v" in namespace "provisioning-774"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:51.160: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
Oct  3 23:50:08.741: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8946w6rs5
STEP: creating a claim
Oct  3 23:50:08.770: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-8d7k
STEP: Creating a pod to test subpath
Oct  3 23:50:08.869: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-8d7k" in namespace "provisioning-8946" to be "Succeeded or Failed"
Oct  3 23:50:08.908: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 38.802346ms
Oct  3 23:50:10.949: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079222383s
Oct  3 23:50:12.972: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103014579s
Oct  3 23:50:15.000: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130408664s
Oct  3 23:50:17.027: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157281777s
Oct  3 23:50:19.060: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19026991s
Oct  3 23:50:21.084: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.214616565s
Oct  3 23:50:23.111: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.241902295s
Oct  3 23:50:25.136: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 16.266685475s
Oct  3 23:50:27.163: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 18.293686828s
Oct  3 23:50:29.188: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Pending", Reason="", readiness=false. Elapsed: 20.318709478s
Oct  3 23:50:31.213: INFO: Pod "pod-subpath-test-dynamicpv-8d7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.34364974s
STEP: Saw pod success
Oct  3 23:50:31.213: INFO: Pod "pod-subpath-test-dynamicpv-8d7k" satisfied condition "Succeeded or Failed"
Oct  3 23:50:31.238: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-dynamicpv-8d7k container test-container-subpath-dynamicpv-8d7k: <nil>
STEP: delete the pod
Oct  3 23:50:31.311: INFO: Waiting for pod pod-subpath-test-dynamicpv-8d7k to disappear
Oct  3 23:50:31.334: INFO: Pod pod-subpath-test-dynamicpv-8d7k no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-8d7k
Oct  3 23:50:31.334: INFO: Deleting pod "pod-subpath-test-dynamicpv-8d7k" in namespace "provisioning-8946"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:51.718: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:42.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
• [SLOW TEST:10.381 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:52.839: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 43 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-5788cb95-60b3-4231-bd34-ca6e15b965a9
STEP: Creating a pod to test consume secrets
Oct  3 23:50:42.717: INFO: Waiting up to 5m0s for pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3" in namespace "secrets-2014" to be "Succeeded or Failed"
Oct  3 23:50:42.743: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.515253ms
Oct  3 23:50:44.770: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052746571s
Oct  3 23:50:46.795: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077717378s
Oct  3 23:50:48.820: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103012272s
Oct  3 23:50:50.846: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12871996s
Oct  3 23:50:52.871: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153535312s
STEP: Saw pod success
Oct  3 23:50:52.871: INFO: Pod "pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3" satisfied condition "Succeeded or Failed"
Oct  3 23:50:52.896: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3 container secret-volume-test: <nil>
STEP: delete the pod
Oct  3 23:50:52.964: INFO: Waiting for pod pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3 to disappear
Oct  3 23:50:52.988: INFO: Pod pod-secrets-8037f13a-f2bb-4583-8de6-94cd33283ff3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.625 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:50:53.064: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:50:48.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026" in namespace "downward-api-7638" to be "Succeeded or Failed"
Oct  3 23:50:48.862: INFO: Pod "downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026": Phase="Pending", Reason="", readiness=false. Elapsed: 25.382143ms
Oct  3 23:50:50.892: INFO: Pod "downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056261241s
Oct  3 23:50:52.919: INFO: Pod "downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082512224s
STEP: Saw pod success
Oct  3 23:50:52.919: INFO: Pod "downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026" satisfied condition "Succeeded or Failed"
Oct  3 23:50:52.947: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026 container client-container: <nil>
STEP: delete the pod
Oct  3 23:50:53.007: INFO: Waiting for pod downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026 to disappear
Oct  3 23:50:53.035: INFO: Pod downwardapi-volume-cb732a6d-c628-44dd-b174-85d098715026 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 73 lines ...
Oct  3 23:49:58.538: INFO: PersistentVolumeClaim csi-hostpathl8gvd found but phase is Pending instead of Bound.
Oct  3 23:50:00.562: INFO: PersistentVolumeClaim csi-hostpathl8gvd found but phase is Pending instead of Bound.
Oct  3 23:50:02.587: INFO: PersistentVolumeClaim csi-hostpathl8gvd found but phase is Pending instead of Bound.
Oct  3 23:50:04.610: INFO: PersistentVolumeClaim csi-hostpathl8gvd found and phase=Bound (16.229748307s)
STEP: Expanding non-expandable pvc
Oct  3 23:50:04.669: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  3 23:50:04.724: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:06.793: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:08.799: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:10.780: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:12.775: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:14.786: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:16.789: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:18.784: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:20.773: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:22.828: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:24.775: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:26.788: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:28.780: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:30.773: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:32.776: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:34.774: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  3 23:50:34.824: INFO: Error updating pvc csi-hostpathl8gvd: persistentvolumeclaims "csi-hostpathl8gvd" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct  3 23:50:34.824: INFO: Deleting PersistentVolumeClaim "csi-hostpathl8gvd"
Oct  3 23:50:34.851: INFO: Waiting up to 5m0s for PersistentVolume pvc-86a491a4-0658-490f-9c37-883b4a41a935 to get deleted
Oct  3 23:50:34.879: INFO: PersistentVolume pvc-86a491a4-0658-490f-9c37-883b4a41a935 found and phase=Released (27.328481ms)
Oct  3 23:50:39.903: INFO: PersistentVolume pvc-86a491a4-0658-490f-9c37-883b4a41a935 was removed
STEP: Deleting sc
... skipping 64 lines ...
Oct  3 23:50:27.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  3 23:50:27.289: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  3 23:50:27.346: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2371" in namespace "provisioning-2371" to be "Succeeded or Failed"
Oct  3 23:50:27.369: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 23.455712ms
Oct  3 23:50:29.400: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054161383s
Oct  3 23:50:31.427: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081341474s
Oct  3 23:50:33.459: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113390422s
Oct  3 23:50:35.523: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177385144s
Oct  3 23:50:37.548: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.202293161s
STEP: Saw pod success
Oct  3 23:50:37.548: INFO: Pod "hostpath-symlink-prep-provisioning-2371" satisfied condition "Succeeded or Failed"
Oct  3 23:50:37.548: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2371" in namespace "provisioning-2371"
Oct  3 23:50:37.578: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2371" to be fully deleted
Oct  3 23:50:37.603: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7sbl
STEP: Creating a pod to test subpath
Oct  3 23:50:37.630: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7sbl" in namespace "provisioning-2371" to be "Succeeded or Failed"
Oct  3 23:50:37.658: INFO: Pod "pod-subpath-test-inlinevolume-7sbl": Phase="Pending", Reason="", readiness=false. Elapsed: 28.691852ms
Oct  3 23:50:39.684: INFO: Pod "pod-subpath-test-inlinevolume-7sbl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054032365s
Oct  3 23:50:41.709: INFO: Pod "pod-subpath-test-inlinevolume-7sbl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079728753s
Oct  3 23:50:43.735: INFO: Pod "pod-subpath-test-inlinevolume-7sbl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105228844s
Oct  3 23:50:45.761: INFO: Pod "pod-subpath-test-inlinevolume-7sbl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131794442s
STEP: Saw pod success
Oct  3 23:50:45.761: INFO: Pod "pod-subpath-test-inlinevolume-7sbl" satisfied condition "Succeeded or Failed"
Oct  3 23:50:45.786: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-subpath-test-inlinevolume-7sbl container test-container-volume-inlinevolume-7sbl: <nil>
STEP: delete the pod
Oct  3 23:50:45.878: INFO: Waiting for pod pod-subpath-test-inlinevolume-7sbl to disappear
Oct  3 23:50:45.903: INFO: Pod pod-subpath-test-inlinevolume-7sbl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7sbl
Oct  3 23:50:45.904: INFO: Deleting pod "pod-subpath-test-inlinevolume-7sbl" in namespace "provisioning-2371"
STEP: Deleting pod
Oct  3 23:50:45.927: INFO: Deleting pod "pod-subpath-test-inlinevolume-7sbl" in namespace "provisioning-2371"
Oct  3 23:50:45.982: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2371" in namespace "provisioning-2371" to be "Succeeded or Failed"
Oct  3 23:50:46.005: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 23.562792ms
Oct  3 23:50:48.030: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048558775s
Oct  3 23:50:50.056: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074249884s
Oct  3 23:50:52.084: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102455618s
Oct  3 23:50:54.114: INFO: Pod "hostpath-symlink-prep-provisioning-2371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132373201s
STEP: Saw pod success
Oct  3 23:50:54.114: INFO: Pod "hostpath-symlink-prep-provisioning-2371" satisfied condition "Succeeded or Failed"
Oct  3 23:50:54.114: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2371" in namespace "provisioning-2371"
Oct  3 23:50:54.153: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2371" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:50:54.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2371" for this suite.
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-qvfc
STEP: Creating a pod to test atomic-volume-subpath
Oct  3 23:50:19.321: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qvfc" in namespace "subpath-5811" to be "Succeeded or Failed"
Oct  3 23:50:19.349: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.955499ms
Oct  3 23:50:21.373: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051771192s
Oct  3 23:50:23.424: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10307056s
Oct  3 23:50:25.449: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127687016s
Oct  3 23:50:27.475: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154033609s
Oct  3 23:50:29.498: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.176984894s
... skipping 8 lines ...
Oct  3 23:50:47.736: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Running", Reason="", readiness=true. Elapsed: 28.414533512s
Oct  3 23:50:49.759: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Running", Reason="", readiness=true. Elapsed: 30.43804039s
Oct  3 23:50:51.783: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Running", Reason="", readiness=true. Elapsed: 32.461751799s
Oct  3 23:50:53.806: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Running", Reason="", readiness=true. Elapsed: 34.484802443s
Oct  3 23:50:55.830: INFO: Pod "pod-subpath-test-configmap-qvfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.509498119s
STEP: Saw pod success
Oct  3 23:50:55.831: INFO: Pod "pod-subpath-test-configmap-qvfc" satisfied condition "Succeeded or Failed"
Oct  3 23:50:55.853: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-configmap-qvfc container test-container-subpath-configmap-qvfc: <nil>
STEP: delete the pod
Oct  3 23:50:55.917: INFO: Waiting for pod pod-subpath-test-configmap-qvfc to disappear
Oct  3 23:50:55.942: INFO: Pod pod-subpath-test-configmap-qvfc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qvfc
Oct  3 23:50:55.942: INFO: Deleting pod "pod-subpath-test-configmap-qvfc" in namespace "subpath-5811"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:7.934 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:01.030: INFO: Only supported for providers [openstack] (not gce)
... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":46,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:02.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
Oct  3 23:51:02.278: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:02.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-4803" for this suite.


S [SKIPPING] [0.171 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:737
    should report an error and create no PV [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738

    Only supported for providers [aws] (not gce)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:739
------------------------------
... skipping 42 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:54.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-458/configmap-test-eba4e418-6ff7-454f-a041-685bcbf14e81
STEP: Creating a pod to test consume configMaps
Oct  3 23:50:54.453: INFO: Waiting up to 5m0s for pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f" in namespace "configmap-458" to be "Succeeded or Failed"
Oct  3 23:50:54.480: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.633765ms
Oct  3 23:50:56.504: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051001979s
Oct  3 23:50:58.529: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075255366s
Oct  3 23:51:00.555: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101409867s
Oct  3 23:51:02.579: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126140254s
Oct  3 23:51:04.605: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152127572s
STEP: Saw pod success
Oct  3 23:51:04.606: INFO: Pod "pod-configmaps-324fe50e-6547-4262-8b07-57455408543f" satisfied condition "Succeeded or Failed"
Oct  3 23:51:04.629: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-configmaps-324fe50e-6547-4262-8b07-57455408543f container env-test: <nil>
STEP: delete the pod
Oct  3 23:51:04.687: INFO: Waiting for pod pod-configmaps-324fe50e-6547-4262-8b07-57455408543f to disappear
Oct  3 23:51:04.711: INFO: Pod pod-configmaps-324fe50e-6547-4262-8b07-57455408543f no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.504 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":57,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:22.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 176 lines ...
• [SLOW TEST:61.114 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:07.224: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":53,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:52.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:12.002 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-340
STEP: Waiting until pod test-pod will start running in namespace statefulset-340
STEP: Creating statefulset with conflicting port in namespace statefulset-340
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-340
Oct  3 23:50:42.211: INFO: Observed stateful pod in namespace: statefulset-340, name: ss-0, uid: 7c413096-4008-4c61-9829-07ed001000ee, status phase: Pending. Waiting for statefulset controller to delete.
Oct  3 23:50:43.873: INFO: Observed stateful pod in namespace: statefulset-340, name: ss-0, uid: 7c413096-4008-4c61-9829-07ed001000ee, status phase: Failed. Waiting for statefulset controller to delete.
Oct  3 23:50:43.885: INFO: Observed stateful pod in namespace: statefulset-340, name: ss-0, uid: 7c413096-4008-4c61-9829-07ed001000ee, status phase: Failed. Waiting for statefulset controller to delete.
Oct  3 23:50:43.892: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-340
STEP: Removing pod with conflicting port in namespace statefulset-340
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-340 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Oct  3 23:50:58.131: INFO: Deleting all statefulset in ns statefulset-340
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:08.440: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
Oct  3 23:50:31.054: INFO: PersistentVolumeClaim pvc-l7gk4 found but phase is Pending instead of Bound.
Oct  3 23:50:33.077: INFO: PersistentVolumeClaim pvc-l7gk4 found and phase=Bound (6.120770355s)
Oct  3 23:50:33.077: INFO: Waiting up to 3m0s for PersistentVolume gcepd-7m8hb to have phase Bound
Oct  3 23:50:33.105: INFO: PersistentVolume gcepd-7m8hb found and phase=Bound (27.976513ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-7p2b
STEP: Creating a pod to test exec-volume-test
Oct  3 23:50:33.175: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-7p2b" in namespace "volume-4122" to be "Succeeded or Failed"
Oct  3 23:50:33.197: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066916ms
Oct  3 23:50:35.220: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04528662s
Oct  3 23:50:37.249: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073609543s
Oct  3 23:50:39.279: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104204165s
Oct  3 23:50:41.303: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127559635s
Oct  3 23:50:43.326: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15100033s
Oct  3 23:50:45.350: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175283799s
Oct  3 23:50:47.381: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.205818805s
Oct  3 23:50:49.405: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.229955626s
STEP: Saw pod success
Oct  3 23:50:49.405: INFO: Pod "exec-volume-test-preprovisionedpv-7p2b" satisfied condition "Succeeded or Failed"
Oct  3 23:50:49.428: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod exec-volume-test-preprovisionedpv-7p2b container exec-container-preprovisionedpv-7p2b: <nil>
STEP: delete the pod
Oct  3 23:50:49.488: INFO: Waiting for pod exec-volume-test-preprovisionedpv-7p2b to disappear
Oct  3 23:50:49.511: INFO: Pod exec-volume-test-preprovisionedpv-7p2b no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-7p2b
Oct  3 23:50:49.511: INFO: Deleting pod "exec-volume-test-preprovisionedpv-7p2b" in namespace "volume-4122"
STEP: Deleting pv and pvc
Oct  3 23:50:49.534: INFO: Deleting PersistentVolumeClaim "pvc-l7gk4"
Oct  3 23:50:49.558: INFO: Deleting PersistentVolume "gcepd-7m8hb"
Oct  3 23:50:50.275: INFO: error deleting PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-09bfe141-3d0b-4518-9e93-9df46685e49d' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:50:50.275: INFO: Couldn't delete PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-09bfe141-3d0b-4518-9e93-9df46685e49d' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:50:55.936: INFO: error deleting PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-09bfe141-3d0b-4518-9e93-9df46685e49d' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:50:55.936: INFO: Couldn't delete PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-09bfe141-3d0b-4518-9e93-9df46685e49d' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:51:01.596: INFO: error deleting PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-09bfe141-3d0b-4518-9e93-9df46685e49d' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:51:01.596: INFO: Couldn't delete PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-09bfe141-3d0b-4518-9e93-9df46685e49d' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:51:08.602: INFO: Successfully deleted PD "e2e-09bfe141-3d0b-4518-9e93-9df46685e49d".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:08.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4122" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:08.670: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 194 lines ...
• [SLOW TEST:18.922 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-x9zw
STEP: Creating a pod to test atomic-volume-subpath
Oct  3 23:50:44.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x9zw" in namespace "subpath-816" to be "Succeeded or Failed"
Oct  3 23:50:44.916: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.384414ms
Oct  3 23:50:46.946: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060558787s
Oct  3 23:50:48.972: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085779485s
Oct  3 23:50:50.997: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 6.110964517s
Oct  3 23:50:53.022: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 8.135784773s
Oct  3 23:50:55.047: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 10.161491305s
... skipping 2 lines ...
Oct  3 23:51:01.134: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 16.247921088s
Oct  3 23:51:03.159: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 18.273149496s
Oct  3 23:51:05.185: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 20.298654673s
Oct  3 23:51:07.220: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Running", Reason="", readiness=true. Elapsed: 22.334039279s
Oct  3 23:51:09.246: INFO: Pod "pod-subpath-test-configmap-x9zw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.359925858s
STEP: Saw pod success
Oct  3 23:51:09.246: INFO: Pod "pod-subpath-test-configmap-x9zw" satisfied condition "Succeeded or Failed"
Oct  3 23:51:09.289: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-subpath-test-configmap-x9zw container test-container-subpath-configmap-x9zw: <nil>
STEP: delete the pod
Oct  3 23:51:09.392: INFO: Waiting for pod pod-subpath-test-configmap-x9zw to disappear
Oct  3 23:51:09.428: INFO: Pod pod-subpath-test-configmap-x9zw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x9zw
Oct  3 23:51:09.428: INFO: Deleting pod "pod-subpath-test-configmap-x9zw" in namespace "subpath-816"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:09.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-5843" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":4,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:51:02.561: INFO: Waiting up to 5m0s for pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2" in namespace "downward-api-7949" to be "Succeeded or Failed"
Oct  3 23:51:02.585: INFO: Pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.48661ms
Oct  3 23:51:04.608: INFO: Pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046608716s
Oct  3 23:51:06.636: INFO: Pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074830284s
Oct  3 23:51:08.662: INFO: Pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10043552s
Oct  3 23:51:10.697: INFO: Pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136019394s
STEP: Saw pod success
Oct  3 23:51:10.697: INFO: Pod "metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2" satisfied condition "Succeeded or Failed"
Oct  3 23:51:10.721: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2 container client-container: <nil>
STEP: delete the pod
Oct  3 23:51:10.790: INFO: Waiting for pod metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2 to disappear
Oct  3 23:51:10.815: INFO: Pod metadata-volume-d50e3c21-be06-4c83-9b9a-0e29f6bb2be2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.451 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:09.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Oct  3 23:51:09.531: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-9d4040c4-87b4-434a-8bcf-3ec3313c510e" in namespace "security-context-test-2847" to be "Succeeded or Failed"
Oct  3 23:51:09.562: INFO: Pod "busybox-readonly-true-9d4040c4-87b4-434a-8bcf-3ec3313c510e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.618934ms
Oct  3 23:51:11.588: INFO: Pod "busybox-readonly-true-9d4040c4-87b4-434a-8bcf-3ec3313c510e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056882658s
Oct  3 23:51:13.615: INFO: Pod "busybox-readonly-true-9d4040c4-87b4-434a-8bcf-3ec3313c510e": Phase="Failed", Reason="", readiness=false. Elapsed: 4.083385616s
Oct  3 23:51:13.615: INFO: Pod "busybox-readonly-true-9d4040c4-87b4-434a-8bcf-3ec3313c510e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:13.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2847" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:13.682: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":6,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  3 23:51:09.709: INFO: Waiting up to 5m0s for pod "pod-44da382b-0f37-475e-9975-0b7f6283bfdc" in namespace "emptydir-3866" to be "Succeeded or Failed"
Oct  3 23:51:09.733: INFO: Pod "pod-44da382b-0f37-475e-9975-0b7f6283bfdc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.219952ms
Oct  3 23:51:11.759: INFO: Pod "pod-44da382b-0f37-475e-9975-0b7f6283bfdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049550497s
Oct  3 23:51:13.797: INFO: Pod "pod-44da382b-0f37-475e-9975-0b7f6283bfdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088223087s
Oct  3 23:51:15.826: INFO: Pod "pod-44da382b-0f37-475e-9975-0b7f6283bfdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117380992s
STEP: Saw pod success
Oct  3 23:51:15.827: INFO: Pod "pod-44da382b-0f37-475e-9975-0b7f6283bfdc" satisfied condition "Succeeded or Failed"
Oct  3 23:51:15.851: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-44da382b-0f37-475e-9975-0b7f6283bfdc container test-container: <nil>
STEP: delete the pod
Oct  3 23:51:15.927: INFO: Waiting for pod pod-44da382b-0f37-475e-9975-0b7f6283bfdc to disappear
Oct  3 23:51:15.957: INFO: Pod pod-44da382b-0f37-475e-9975-0b7f6283bfdc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":5,"skipped":37,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:16.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3107" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":6,"skipped":45,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:16.427: INFO: Only supported for providers [vsphere] (not gce)
... skipping 238 lines ...
Oct  3 23:51:14.830: INFO: Waiting for PV local-pvvjf28 to bind to PVC pvc-r4j9k
Oct  3 23:51:14.830: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r4j9k] to have phase Bound
Oct  3 23:51:14.877: INFO: PersistentVolumeClaim pvc-r4j9k found but phase is Pending instead of Bound.
Oct  3 23:51:16.920: INFO: PersistentVolumeClaim pvc-r4j9k found and phase=Bound (2.089645473s)
Oct  3 23:51:16.920: INFO: Waiting up to 3m0s for PersistentVolume local-pvvjf28 to have phase Bound
Oct  3 23:51:16.948: INFO: PersistentVolume local-pvvjf28 found and phase=Bound (28.344125ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
Oct  3 23:51:17.047: INFO: Waiting up to 5m0s for pod "pod-564200d8-0b4c-42b4-b2a5-7d361d4a5b49" in namespace "persistent-local-volumes-test-3610" to be "Unschedulable"
Oct  3 23:51:17.072: INFO: Pod "pod-564200d8-0b4c-42b4-b2a5-7d361d4a5b49": Phase="Pending", Reason="", readiness=false. Elapsed: 25.576463ms
Oct  3 23:51:17.072: INFO: Pod "pod-564200d8-0b4c-42b4-b2a5-7d361d4a5b49" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:11.403 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":6,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:17.967: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:53.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-9qhq
STEP: Creating a pod to test atomic-volume-subpath
Oct  3 23:50:53.363: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9qhq" in namespace "subpath-1928" to be "Succeeded or Failed"
Oct  3 23:50:53.389: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.182482ms
Oct  3 23:50:55.414: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051145559s
Oct  3 23:50:57.448: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 4.08542265s
Oct  3 23:50:59.474: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 6.111027033s
Oct  3 23:51:01.511: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 8.14830816s
Oct  3 23:51:03.536: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 10.172842297s
... skipping 3 lines ...
Oct  3 23:51:11.658: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 18.294964162s
Oct  3 23:51:13.694: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 20.331298223s
Oct  3 23:51:15.722: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 22.35953828s
Oct  3 23:51:17.764: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Running", Reason="", readiness=true. Elapsed: 24.401028773s
Oct  3 23:51:19.791: INFO: Pod "pod-subpath-test-downwardapi-9qhq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.427805711s
STEP: Saw pod success
Oct  3 23:51:19.791: INFO: Pod "pod-subpath-test-downwardapi-9qhq" satisfied condition "Succeeded or Failed"
Oct  3 23:51:19.816: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-subpath-test-downwardapi-9qhq container test-container-subpath-downwardapi-9qhq: <nil>
STEP: delete the pod
Oct  3 23:51:19.875: INFO: Waiting for pod pod-subpath-test-downwardapi-9qhq to disappear
Oct  3 23:51:19.899: INFO: Pod pod-subpath-test-downwardapi-9qhq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-9qhq
Oct  3 23:51:19.899: INFO: Deleting pod "pod-subpath-test-downwardapi-9qhq" in namespace "subpath-1928"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:20.004: INFO: Only supported for providers [azure] (not gce)
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:23.160: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":3,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:24.886: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 182 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":40,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:27.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-4009" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:53.102: INFO: >>> kubeConfig: /root/.kube/config
... skipping 23 lines ...
Oct  3 23:51:16.783: INFO: PersistentVolumeClaim pvc-jltnx found but phase is Pending instead of Bound.
Oct  3 23:51:18.809: INFO: PersistentVolumeClaim pvc-jltnx found and phase=Bound (14.248758404s)
Oct  3 23:51:18.809: INFO: Waiting up to 3m0s for PersistentVolume local-fgjd8 to have phase Bound
Oct  3 23:51:18.841: INFO: PersistentVolume local-fgjd8 found and phase=Bound (31.954949ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nf4t
STEP: Creating a pod to test subpath
Oct  3 23:51:18.945: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nf4t" in namespace "provisioning-3637" to be "Succeeded or Failed"
Oct  3 23:51:18.984: INFO: Pod "pod-subpath-test-preprovisionedpv-nf4t": Phase="Pending", Reason="", readiness=false. Elapsed: 39.618418ms
Oct  3 23:51:21.010: INFO: Pod "pod-subpath-test-preprovisionedpv-nf4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064690308s
Oct  3 23:51:23.035: INFO: Pod "pod-subpath-test-preprovisionedpv-nf4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090093737s
Oct  3 23:51:25.062: INFO: Pod "pod-subpath-test-preprovisionedpv-nf4t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117075623s
Oct  3 23:51:27.089: INFO: Pod "pod-subpath-test-preprovisionedpv-nf4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143921118s
STEP: Saw pod success
Oct  3 23:51:27.089: INFO: Pod "pod-subpath-test-preprovisionedpv-nf4t" satisfied condition "Succeeded or Failed"
Oct  3 23:51:27.114: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-preprovisionedpv-nf4t container test-container-subpath-preprovisionedpv-nf4t: <nil>
STEP: delete the pod
Oct  3 23:51:27.175: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nf4t to disappear
Oct  3 23:51:27.199: INFO: Pod pod-subpath-test-preprovisionedpv-nf4t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nf4t
Oct  3 23:51:27.199: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nf4t" in namespace "provisioning-3637"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:28.484: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
Oct  3 23:51:17.486: INFO: PersistentVolumeClaim pvc-48l2h found but phase is Pending instead of Bound.
Oct  3 23:51:19.512: INFO: PersistentVolumeClaim pvc-48l2h found and phase=Bound (4.084680129s)
Oct  3 23:51:19.512: INFO: Waiting up to 3m0s for PersistentVolume local-srmtz to have phase Bound
Oct  3 23:51:19.542: INFO: PersistentVolume local-srmtz found and phase=Bound (29.76661ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k6dk
STEP: Creating a pod to test subpath
Oct  3 23:51:19.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k6dk" in namespace "provisioning-2270" to be "Succeeded or Failed"
Oct  3 23:51:19.694: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk": Phase="Pending", Reason="", readiness=false. Elapsed: 65.277927ms
Oct  3 23:51:21.737: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107511315s
Oct  3 23:51:23.761: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132132779s
STEP: Saw pod success
Oct  3 23:51:23.761: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk" satisfied condition "Succeeded or Failed"
Oct  3 23:51:23.787: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-k6dk container test-container-subpath-preprovisionedpv-k6dk: <nil>
STEP: delete the pod
Oct  3 23:51:23.866: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k6dk to disappear
Oct  3 23:51:23.890: INFO: Pod pod-subpath-test-preprovisionedpv-k6dk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k6dk
Oct  3 23:51:23.890: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k6dk" in namespace "provisioning-2270"
STEP: Creating pod pod-subpath-test-preprovisionedpv-k6dk
STEP: Creating a pod to test subpath
Oct  3 23:51:23.937: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k6dk" in namespace "provisioning-2270" to be "Succeeded or Failed"
Oct  3 23:51:23.963: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk": Phase="Pending", Reason="", readiness=false. Elapsed: 26.135549ms
Oct  3 23:51:25.989: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052450258s
Oct  3 23:51:28.027: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090005067s
STEP: Saw pod success
Oct  3 23:51:28.027: INFO: Pod "pod-subpath-test-preprovisionedpv-k6dk" satisfied condition "Succeeded or Failed"
Oct  3 23:51:28.051: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-k6dk container test-container-subpath-preprovisionedpv-k6dk: <nil>
STEP: delete the pod
Oct  3 23:51:28.122: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k6dk to disappear
Oct  3 23:51:28.152: INFO: Pod pod-subpath-test-preprovisionedpv-k6dk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k6dk
Oct  3 23:51:28.152: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k6dk" in namespace "provisioning-2270"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":7,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:30.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-233" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:30.926: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 65 lines ...
• [SLOW TEST:22.514 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:31.346: INFO: Only supported for providers [azure] (not gce)
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:31.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5619" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct  3 23:51:17.560: INFO: PersistentVolumeClaim pvc-d44s6 found but phase is Pending instead of Bound.
Oct  3 23:51:19.591: INFO: PersistentVolumeClaim pvc-d44s6 found and phase=Bound (12.21546644s)
Oct  3 23:51:19.591: INFO: Waiting up to 3m0s for PersistentVolume local-qfkt2 to have phase Bound
Oct  3 23:51:19.617: INFO: PersistentVolume local-qfkt2 found and phase=Bound (25.97788ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zkjg
STEP: Creating a pod to test subpath
Oct  3 23:51:19.759: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zkjg" in namespace "provisioning-9039" to be "Succeeded or Failed"
Oct  3 23:51:19.796: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Pending", Reason="", readiness=false. Elapsed: 36.527553ms
Oct  3 23:51:21.820: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060551639s
Oct  3 23:51:23.845: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086102834s
Oct  3 23:51:25.869: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110293779s
Oct  3 23:51:27.897: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13833489s
Oct  3 23:51:29.923: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.16438738s
Oct  3 23:51:31.951: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.19224423s
STEP: Saw pod success
Oct  3 23:51:31.951: INFO: Pod "pod-subpath-test-preprovisionedpv-zkjg" satisfied condition "Succeeded or Failed"
Oct  3 23:51:31.977: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-preprovisionedpv-zkjg container test-container-subpath-preprovisionedpv-zkjg: <nil>
STEP: delete the pod
Oct  3 23:51:32.069: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zkjg to disappear
Oct  3 23:51:32.094: INFO: Pod pod-subpath-test-preprovisionedpv-zkjg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zkjg
Oct  3 23:51:32.094: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zkjg" in namespace "provisioning-9039"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":40,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:32.797: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 146 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity unused
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":6,"skipped":76,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:34.924: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:35.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7479" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":7,"skipped":92,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:35.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6518" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:35.403: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 95 lines ...
STEP: SSH'ing host 34.106.41.188:22
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
STEP: SSH'ing host 34.106.41.188:22
Oct  3 23:51:31.613: INFO: Got stdout from 34.106.41.188:22: stdout
Oct  3 23:51:31.613: INFO: Got stderr from 34.106.41.188:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:36.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-7959" for this suite.


... skipping 37 lines ...
• [SLOW TEST:9.168 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:38.394: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:39.428: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
STEP: Destroying namespace "pod-disks-2381" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.352 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 47 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:39.835: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":9,"skipped":62,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:38.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:249
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:40.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3356" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":10,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:40.998: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:44.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1879" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":94,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
STEP: Creating pod
Oct  3 23:51:03.836: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  3 23:51:03.869: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-sz2jl] to have phase Bound
Oct  3 23:51:03.895: INFO: PersistentVolumeClaim pvc-sz2jl found but phase is Pending instead of Bound.
Oct  3 23:51:05.922: INFO: PersistentVolumeClaim pvc-sz2jl found and phase=Bound (2.05221562s)
STEP: checking for CSIInlineVolumes feature
Oct  3 23:51:20.133: INFO: Error getting logs for pod inline-volume-6k8n9: the server rejected our request for an unknown reason (get pods inline-volume-6k8n9)
Oct  3 23:51:20.188: INFO: Deleting pod "inline-volume-6k8n9" in namespace "csi-mock-volumes-1593"
Oct  3 23:51:20.229: INFO: Wait up to 5m0s for pod "inline-volume-6k8n9" to be fully deleted
STEP: Deleting the previously created pod
Oct  3 23:51:26.287: INFO: Deleting pod "pvc-volume-tester-9jcp7" in namespace "csi-mock-volumes-1593"
Oct  3 23:51:26.314: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9jcp7" to be fully deleted
STEP: Checking CSI driver logs
Oct  3 23:51:28.459: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-9jcp7
Oct  3 23:51:28.460: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1593
Oct  3 23:51:28.460: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 8d78d042-6b5d-4610-a917-b467464d0680
Oct  3 23:51:28.460: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct  3 23:51:28.460: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Oct  3 23:51:28.460: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/8d78d042-6b5d-4610-a917-b467464d0680/volumes/kubernetes.io~csi/pvc-c4aa2542-0cc8-4697-bc52-cb8fb51fe0f6/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-9jcp7
Oct  3 23:51:28.460: INFO: Deleting pod "pvc-volume-tester-9jcp7" in namespace "csi-mock-volumes-1593"
STEP: Deleting claim pvc-sz2jl
Oct  3 23:51:28.550: INFO: Waiting up to 2m0s for PersistentVolume pvc-c4aa2542-0cc8-4697-bc52-cb8fb51fe0f6 to get deleted
Oct  3 23:51:28.579: INFO: PersistentVolume pvc-c4aa2542-0cc8-4697-bc52-cb8fb51fe0f6 found and phase=Bound (28.237538ms)
Oct  3 23:51:30.611: INFO: PersistentVolume pvc-c4aa2542-0cc8-4697-bc52-cb8fb51fe0f6 found and phase=Released (2.060989636s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":8,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:9.349 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":9,"skipped":80,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:49.229: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:51:46.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca" in namespace "downward-api-582" to be "Succeeded or Failed"
Oct  3 23:51:46.454: INFO: Pod "downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca": Phase="Pending", Reason="", readiness=false. Elapsed: 24.59873ms
Oct  3 23:51:48.488: INFO: Pod "downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057845489s
Oct  3 23:51:50.512: INFO: Pod "downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082474666s
STEP: Saw pod success
Oct  3 23:51:50.512: INFO: Pod "downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca" satisfied condition "Succeeded or Failed"
Oct  3 23:51:50.538: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca container client-container: <nil>
STEP: delete the pod
Oct  3 23:51:50.603: INFO: Waiting for pod downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca to disappear
Oct  3 23:51:50.631: INFO: Pod downwardapi-volume-0986c8e6-f3ab-41f2-9c01-8a602c7471ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:50.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-582" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":7,"skipped":71,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:36.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:51.773: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
W1003 23:51:13.817968    5651 gce_instances.go:410] Cloud object does not have informers set, should only happen in E2E binary.
Oct  3 23:51:15.636: INFO: Successfully created a new PD: "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547".
Oct  3 23:51:15.636: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-pm9g
STEP: Creating a pod to test exec-volume-test
W1003 23:51:15.669171    5651 warnings.go:70] spec.nodeSelector[failure-domain.beta.kubernetes.io/zone]: deprecated since v1.17; use "topology.kubernetes.io/zone" instead
Oct  3 23:51:15.669: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-pm9g" in namespace "volume-7055" to be "Succeeded or Failed"
Oct  3 23:51:15.692: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 23.394286ms
Oct  3 23:51:17.725: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05601794s
Oct  3 23:51:19.759: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089633984s
Oct  3 23:51:21.783: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113651961s
Oct  3 23:51:23.807: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138421214s
Oct  3 23:51:25.832: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.163474183s
Oct  3 23:51:27.857: INFO: Pod "exec-volume-test-inlinevolume-pm9g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.188125855s
STEP: Saw pod success
Oct  3 23:51:27.857: INFO: Pod "exec-volume-test-inlinevolume-pm9g" satisfied condition "Succeeded or Failed"
Oct  3 23:51:27.882: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod exec-volume-test-inlinevolume-pm9g container exec-container-inlinevolume-pm9g: <nil>
STEP: delete the pod
Oct  3 23:51:27.945: INFO: Waiting for pod exec-volume-test-inlinevolume-pm9g to disappear
Oct  3 23:51:27.968: INFO: Pod exec-volume-test-inlinevolume-pm9g no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-pm9g
Oct  3 23:51:27.968: INFO: Deleting pod "exec-volume-test-inlinevolume-pm9g" in namespace "volume-7055"
Oct  3 23:51:28.656: INFO: error deleting PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:28.656: INFO: Couldn't delete PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:34.234: INFO: error deleting PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:34.234: INFO: Couldn't delete PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:39.790: INFO: error deleting PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:39.790: INFO: Couldn't delete PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:45.515: INFO: error deleting PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:45.515: INFO: Couldn't delete PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-6b4528f7-c043-451f-82ed-40e4e85fd547' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:51:52.631: INFO: Successfully deleted PD "e2e-6b4528f7-c043-451f-82ed-40e4e85fd547".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:52.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7055" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":62,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:50:12.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
Oct  3 23:50:14.191: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5282
Oct  3 23:50:14.226: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5282
Oct  3 23:50:14.258: INFO: creating *v1.StatefulSet: csi-mock-volumes-5282-1353/csi-mockplugin
Oct  3 23:50:14.287: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5282
Oct  3 23:50:14.312: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5282"
Oct  3 23:50:14.354: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5282 to register on node nodes-us-west3-a-chbv
I1003 23:50:32.118172    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1003 23:50:32.172420    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5282","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1003 23:50:32.197661    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1003 23:50:32.228614    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1003 23:50:32.342104    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5282","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1003 23:50:32.541903    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5282"},"Error":"","FullError":null}
STEP: Creating pod
Oct  3 23:50:40.967: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1003 23:50:41.045854    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1003 23:50:41.079579    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9"}}},"Error":"","FullError":null}
I1003 23:50:43.812618    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1003 23:50:43.838671    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  3 23:50:43.861: INFO: >>> kubeConfig: /root/.kube/config
I1003 23:50:44.105419    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9","storage.kubernetes.io/csiProvisionerIdentity":"1633305032241-8081-csi-mock-csi-mock-volumes-5282"}},"Response":{},"Error":"","FullError":null}
I1003 23:50:45.072786    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1003 23:50:45.098070    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  3 23:50:45.120: INFO: >>> kubeConfig: /root/.kube/config
Oct  3 23:50:45.365: INFO: >>> kubeConfig: /root/.kube/config
Oct  3 23:50:45.591: INFO: >>> kubeConfig: /root/.kube/config
I1003 23:50:45.824707    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9/globalmount","target_path":"/var/lib/kubelet/pods/3c0ad015-6220-4cb8-906e-2848f2be1213/volumes/kubernetes.io~csi/pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9","storage.kubernetes.io/csiProvisionerIdentity":"1633305032241-8081-csi-mock-csi-mock-volumes-5282"}},"Response":{},"Error":"","FullError":null}
Oct  3 23:50:53.073: INFO: Deleting pod "pvc-volume-tester-45w4g" in namespace "csi-mock-volumes-5282"
Oct  3 23:50:53.105: INFO: Wait up to 5m0s for pod "pvc-volume-tester-45w4g" to be fully deleted
Oct  3 23:50:53.933: INFO: >>> kubeConfig: /root/.kube/config
I1003 23:50:54.263465    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3c0ad015-6220-4cb8-906e-2848f2be1213/volumes/kubernetes.io~csi/pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9/mount"},"Response":{},"Error":"","FullError":null}
I1003 23:50:54.357201    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1003 23:50:54.388318    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9/globalmount"},"Response":{},"Error":"","FullError":null}
I1003 23:51:01.250228    5758 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct  3 23:51:02.182: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxv9n", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5282", SelfLink:"", UID:"6c581b36-42cf-4c44-99c1-e74b32477fc9", ResourceVersion:"5063", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768901840, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001c73248), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001c73260), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002bbe350), VolumeMode:(*v1.PersistentVolumeMode)(0xc002bbe360), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  3 23:51:02.182: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxv9n", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5282", SelfLink:"", UID:"6c581b36-42cf-4c44-99c1-e74b32477fc9", ResourceVersion:"5065", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768901840, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"nodes-us-west3-a-chbv"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002385bf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002385c08), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002385c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002385c38), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002b9d110), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b9d120), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  3 23:51:02.182: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxv9n", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5282", SelfLink:"", UID:"6c581b36-42cf-4c44-99c1-e74b32477fc9", ResourceVersion:"5066", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768901840, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5282", "volume.kubernetes.io/selected-node":"nodes-us-west3-a-chbv"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b27518), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27530), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b27548), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27560), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b27578), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27590), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002bd23f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002bd2400), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  3 23:51:02.182: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxv9n", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5282", SelfLink:"", UID:"6c581b36-42cf-4c44-99c1-e74b32477fc9", ResourceVersion:"5070", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768901840, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5282", "volume.kubernetes.io/selected-node":"nodes-us-west3-a-chbv"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b275c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b275d8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b275f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27608), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b27620), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27638), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9", StorageClassName:(*string)(0xc002bd2430), VolumeMode:(*v1.PersistentVolumeMode)(0xc002bd2440), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  3 23:51:02.182: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxv9n", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5282", SelfLink:"", UID:"6c581b36-42cf-4c44-99c1-e74b32477fc9", ResourceVersion:"5071", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768901840, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5282", "volume.kubernetes.io/selected-node":"nodes-us-west3-a-chbv"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b27668), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27680), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b27698), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b276b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b276c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b276e0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b276f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b27710), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-6c581b36-42cf-4c44-99c1-e74b32477fc9", StorageClassName:(*string)(0xc002bd2470), VolumeMode:(*v1.PersistentVolumeMode)(0xc002bd2480), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 77 lines ...
Oct  3 23:51:26.451: INFO: PersistentVolumeClaim pvc-wsl9p found and phase=Bound (2.058239629s)
Oct  3 23:51:26.451: INFO: Waiting up to 3m0s for PersistentVolume nfs-lfxmc to have phase Bound
Oct  3 23:51:26.475: INFO: PersistentVolume nfs-lfxmc found and phase=Bound (24.283046ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Oct  3 23:51:26.561: INFO: Waiting up to 5m0s for pod "pvc-tester-5wrrj" in namespace "pv-27" to be "Succeeded or Failed"
Oct  3 23:51:26.588: INFO: Pod "pvc-tester-5wrrj": Phase="Pending", Reason="", readiness=false. Elapsed: 26.899032ms
Oct  3 23:51:28.625: INFO: Pod "pvc-tester-5wrrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063783842s
Oct  3 23:51:30.653: INFO: Pod "pvc-tester-5wrrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091839255s
Oct  3 23:51:32.677: INFO: Pod "pvc-tester-5wrrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115978776s
Oct  3 23:51:34.702: INFO: Pod "pvc-tester-5wrrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141218122s
STEP: Saw pod success
Oct  3 23:51:34.702: INFO: Pod "pvc-tester-5wrrj" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Oct  3 23:51:34.702: INFO: Deleting pod "pvc-tester-5wrrj" in namespace "pv-27"
Oct  3 23:51:34.734: INFO: Wait up to 5m0s for pod "pvc-tester-5wrrj" to be fully deleted
Oct  3 23:51:34.757: INFO: Deleting PVC pvc-wsl9p to trigger reclamation of PV 
Oct  3 23:51:34.757: INFO: Deleting PersistentVolumeClaim "pvc-wsl9p"
Oct  3 23:51:34.783: INFO: Waiting for reclaim process to complete.
... skipping 5 lines ...
Oct  3 23:51:40.944: INFO: PV nfs-lfxmc now in "Available" phase
STEP: Re-mounting the volume.
Oct  3 23:51:40.975: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-w7l7v] to have phase Bound
Oct  3 23:51:41.000: INFO: PersistentVolumeClaim pvc-w7l7v found but phase is Pending instead of Bound.
Oct  3 23:51:43.024: INFO: PersistentVolumeClaim pvc-w7l7v found and phase=Bound (2.048932083s)
STEP: Verifying the mount has been cleaned.
Oct  3 23:51:43.058: INFO: Waiting up to 5m0s for pod "pvc-tester-mc262" in namespace "pv-27" to be "Succeeded or Failed"
Oct  3 23:51:43.087: INFO: Pod "pvc-tester-mc262": Phase="Pending", Reason="", readiness=false. Elapsed: 28.760168ms
Oct  3 23:51:45.113: INFO: Pod "pvc-tester-mc262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05450793s
Oct  3 23:51:47.141: INFO: Pod "pvc-tester-mc262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082318872s
STEP: Saw pod success
Oct  3 23:51:47.141: INFO: Pod "pvc-tester-mc262" satisfied condition "Succeeded or Failed"
Oct  3 23:51:47.141: INFO: Deleting pod "pvc-tester-mc262" in namespace "pv-27"
Oct  3 23:51:47.184: INFO: Wait up to 5m0s for pod "pvc-tester-mc262" to be fully deleted
Oct  3 23:51:47.210: INFO: Pod exited without failure; the volume has been recycled.
Oct  3 23:51:47.210: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Oct  3 23:51:47.210: INFO: Deleting PVC pvc-w7l7v to trigger reclamation of PV 
Oct  3 23:51:47.210: INFO: Deleting PersistentVolumeClaim "pvc-w7l7v"
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":3,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:51:55.519: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
Oct  3 23:51:47.654: INFO: PersistentVolumeClaim pvc-m79np found but phase is Pending instead of Bound.
Oct  3 23:51:49.680: INFO: PersistentVolumeClaim pvc-m79np found and phase=Bound (12.210270731s)
Oct  3 23:51:49.680: INFO: Waiting up to 3m0s for PersistentVolume local-7zlg5 to have phase Bound
Oct  3 23:51:49.704: INFO: PersistentVolume local-7zlg5 found and phase=Bound (24.110392ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4swt
STEP: Creating a pod to test subpath
Oct  3 23:51:49.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4swt" in namespace "provisioning-9368" to be "Succeeded or Failed"
Oct  3 23:51:49.822: INFO: Pod "pod-subpath-test-preprovisionedpv-4swt": Phase="Pending", Reason="", readiness=false. Elapsed: 24.647541ms
Oct  3 23:51:51.847: INFO: Pod "pod-subpath-test-preprovisionedpv-4swt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050154804s
Oct  3 23:51:53.873: INFO: Pod "pod-subpath-test-preprovisionedpv-4swt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075361555s
Oct  3 23:51:55.909: INFO: Pod "pod-subpath-test-preprovisionedpv-4swt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111511628s
STEP: Saw pod success
Oct  3 23:51:55.909: INFO: Pod "pod-subpath-test-preprovisionedpv-4swt" satisfied condition "Succeeded or Failed"
Oct  3 23:51:55.957: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-4swt container test-container-volume-preprovisionedpv-4swt: <nil>
STEP: delete the pod
Oct  3 23:51:56.081: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4swt to disappear
Oct  3 23:51:56.119: INFO: Pod pod-subpath-test-preprovisionedpv-4swt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4swt
Oct  3 23:51:56.119: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4swt" in namespace "provisioning-9368"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":50,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:56.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
STEP: Destroying namespace "services-5299" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:41.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:51:57.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-410" for this suite.


• [SLOW TEST:16.259 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":11,"skipped":83,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":97,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:50.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 67 lines ...
• [SLOW TEST:59.863 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1155
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:32.286 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":10,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
• [SLOW TEST:9.851 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40
------------------------------
{"msg":"PASSED [sig-node] crictl should be able to run crictl on the node","total":-1,"completed":9,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:01.652: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Oct  3 23:51:56.925: INFO: Waiting up to 5m0s for pod "pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3" in namespace "emptydir-4512" to be "Succeeded or Failed"
Oct  3 23:51:56.952: INFO: Pod "pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.961458ms
Oct  3 23:51:58.975: INFO: Pod "pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049337342s
Oct  3 23:52:01.006: INFO: Pod "pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079979933s
Oct  3 23:52:03.029: INFO: Pod "pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103306463s
STEP: Saw pod success
Oct  3 23:52:03.029: INFO: Pod "pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3" satisfied condition "Succeeded or Failed"
Oct  3 23:52:03.053: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3 container test-container: <nil>
STEP: delete the pod
Oct  3 23:52:03.127: INFO: Waiting for pod pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3 to disappear
Oct  3 23:52:03.152: INFO: Pod pod-d18795b0-3f1d-4222-afff-c16ac8fb1cd3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:03.222: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-8249022b-2b4b-4ba3-bcc3-220933a160be
STEP: Creating a pod to test consume secrets
Oct  3 23:52:01.148: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf" in namespace "projected-3484" to be "Succeeded or Failed"
Oct  3 23:52:01.176: INFO: Pod "pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf": Phase="Pending", Reason="", readiness=false. Elapsed: 28.718444ms
Oct  3 23:52:03.200: INFO: Pod "pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052712546s
STEP: Saw pod success
Oct  3 23:52:03.201: INFO: Pod "pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf" satisfied condition "Succeeded or Failed"
Oct  3 23:52:03.223: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  3 23:52:03.307: INFO: Waiting for pod pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf to disappear
Oct  3 23:52:03.330: INFO: Pod pod-projected-secrets-6343bbe5-d93e-4647-a090-225a03fa2faf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:03.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3484" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:03.396: INFO: Only supported for providers [azure] (not gce)
... skipping 20 lines ...
STEP: Creating a kubernetes client
Oct  3 23:51:08.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct  3 23:51:08.198: INFO: PodSpec: initContainers in spec.initContainers
Oct  3 23:52:07.697: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4f3cae45-48c1-437e-8396-bd530a65b5ee", GenerateName:"", Namespace:"init-container-2473", SelfLink:"", UID:"9d58bf45-327d-42d2-8b70-f01b6a312db2", ResourceVersion:"8635", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768901868, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"198181358"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0033ac300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033ac318), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0033ac330), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033ac348), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-ndckz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00365ada0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-ndckz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-ndckz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-ndckz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0034c8428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"nodes-us-west3-a-zfb0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0015810a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034c84b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034c84d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0034c84d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0034c84dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00319a840), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768901868, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768901868, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768901868, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768901868, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.180.0.5", PodIP:"100.96.1.51", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.51"}}, StartTime:(*v1.Time)(0xc0033ac378), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001581180)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015811f0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://b4f8f07b94bb0858684e233ac589e558e6f07e52540325a5a602dd2f294d1de0", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00365ae60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00365ae00), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc0034c855f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:07.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2473" for this suite.


• [SLOW TEST:59.678 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:07.778: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 72 lines ...
• [SLOW TEST:21.002 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":10,"skipped":85,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:500
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":12,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Oct  3 23:51:55.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct  3 23:51:55.709: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  3 23:51:55.813: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6578" in namespace "provisioning-6578" to be "Succeeded or Failed"
Oct  3 23:51:55.852: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Pending", Reason="", readiness=false. Elapsed: 39.325709ms
Oct  3 23:51:57.883: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070029869s
Oct  3 23:51:59.908: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095563563s
STEP: Saw pod success
Oct  3 23:51:59.908: INFO: Pod "hostpath-symlink-prep-provisioning-6578" satisfied condition "Succeeded or Failed"
Oct  3 23:51:59.908: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6578" in namespace "provisioning-6578"
Oct  3 23:51:59.947: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6578" to be fully deleted
Oct  3 23:51:59.969: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-r44n
STEP: Creating a pod to test subpath
Oct  3 23:51:59.995: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-r44n" in namespace "provisioning-6578" to be "Succeeded or Failed"
Oct  3 23:52:00.022: INFO: Pod "pod-subpath-test-inlinevolume-r44n": Phase="Pending", Reason="", readiness=false. Elapsed: 26.823518ms
Oct  3 23:52:02.048: INFO: Pod "pod-subpath-test-inlinevolume-r44n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05279228s
Oct  3 23:52:04.071: INFO: Pod "pod-subpath-test-inlinevolume-r44n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076058875s
STEP: Saw pod success
Oct  3 23:52:04.071: INFO: Pod "pod-subpath-test-inlinevolume-r44n" satisfied condition "Succeeded or Failed"
Oct  3 23:52:04.096: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-inlinevolume-r44n container test-container-volume-inlinevolume-r44n: <nil>
STEP: delete the pod
Oct  3 23:52:04.160: INFO: Waiting for pod pod-subpath-test-inlinevolume-r44n to disappear
Oct  3 23:52:04.190: INFO: Pod pod-subpath-test-inlinevolume-r44n no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-r44n
Oct  3 23:52:04.190: INFO: Deleting pod "pod-subpath-test-inlinevolume-r44n" in namespace "provisioning-6578"
STEP: Deleting pod
Oct  3 23:52:04.218: INFO: Deleting pod "pod-subpath-test-inlinevolume-r44n" in namespace "provisioning-6578"
Oct  3 23:52:04.266: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6578" in namespace "provisioning-6578" to be "Succeeded or Failed"
Oct  3 23:52:04.290: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Pending", Reason="", readiness=false. Elapsed: 23.257148ms
Oct  3 23:52:06.314: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047560115s
Oct  3 23:52:08.345: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078965803s
Oct  3 23:52:10.369: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102375706s
Oct  3 23:52:12.399: INFO: Pod "hostpath-symlink-prep-provisioning-6578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133192396s
STEP: Saw pod success
Oct  3 23:52:12.400: INFO: Pod "hostpath-symlink-prep-provisioning-6578" satisfied condition "Succeeded or Failed"
Oct  3 23:52:12.400: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6578" in namespace "provisioning-6578"
Oct  3 23:52:12.449: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6578" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:12.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6578" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":76,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:12.584: INFO: Driver local doesn't support ext3 -- skipping
... skipping 107 lines ...
• [SLOW TEST:63.197 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:12.931: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 134 lines ...
Oct  3 23:51:41.580: INFO: PersistentVolumeClaim csi-hostpathw9zvq found but phase is Pending instead of Bound.
Oct  3 23:51:43.603: INFO: PersistentVolumeClaim csi-hostpathw9zvq found but phase is Pending instead of Bound.
Oct  3 23:51:45.627: INFO: PersistentVolumeClaim csi-hostpathw9zvq found but phase is Pending instead of Bound.
Oct  3 23:51:47.651: INFO: PersistentVolumeClaim csi-hostpathw9zvq found and phase=Bound (14.218406132s)
STEP: Creating pod pod-subpath-test-dynamicpv-d5qw
STEP: Creating a pod to test subpath
Oct  3 23:51:47.726: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-d5qw" in namespace "provisioning-6712" to be "Succeeded or Failed"
Oct  3 23:51:47.751: INFO: Pod "pod-subpath-test-dynamicpv-d5qw": Phase="Pending", Reason="", readiness=false. Elapsed: 24.195746ms
Oct  3 23:51:49.780: INFO: Pod "pod-subpath-test-dynamicpv-d5qw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053282505s
Oct  3 23:51:51.807: INFO: Pod "pod-subpath-test-dynamicpv-d5qw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080420408s
Oct  3 23:51:53.831: INFO: Pod "pod-subpath-test-dynamicpv-d5qw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104904768s
Oct  3 23:51:55.871: INFO: Pod "pod-subpath-test-dynamicpv-d5qw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144752226s
STEP: Saw pod success
Oct  3 23:51:55.871: INFO: Pod "pod-subpath-test-dynamicpv-d5qw" satisfied condition "Succeeded or Failed"
Oct  3 23:51:55.907: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-dynamicpv-d5qw container test-container-volume-dynamicpv-d5qw: <nil>
STEP: delete the pod
Oct  3 23:51:56.043: INFO: Waiting for pod pod-subpath-test-dynamicpv-d5qw to disappear
Oct  3 23:51:56.079: INFO: Pod pod-subpath-test-dynamicpv-d5qw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-d5qw
Oct  3 23:51:56.079: INFO: Deleting pod "pod-subpath-test-dynamicpv-d5qw" in namespace "provisioning-6712"
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 428 lines ...
• [SLOW TEST:16.064 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":10,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:17.762: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:52:13.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  3 23:52:13.594: INFO: Waiting up to 5m0s for pod "downward-api-3fdf4705-daa0-40df-9718-3686f31ee323" in namespace "downward-api-5765" to be "Succeeded or Failed"
Oct  3 23:52:13.634: INFO: Pod "downward-api-3fdf4705-daa0-40df-9718-3686f31ee323": Phase="Pending", Reason="", readiness=false. Elapsed: 40.451523ms
Oct  3 23:52:15.663: INFO: Pod "downward-api-3fdf4705-daa0-40df-9718-3686f31ee323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069520573s
Oct  3 23:52:17.690: INFO: Pod "downward-api-3fdf4705-daa0-40df-9718-3686f31ee323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096594284s
STEP: Saw pod success
Oct  3 23:52:17.690: INFO: Pod "downward-api-3fdf4705-daa0-40df-9718-3686f31ee323" satisfied condition "Succeeded or Failed"
Oct  3 23:52:17.716: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod downward-api-3fdf4705-daa0-40df-9718-3686f31ee323 container dapi-container: <nil>
STEP: delete the pod
Oct  3 23:52:17.793: INFO: Waiting for pod downward-api-3fdf4705-daa0-40df-9718-3686f31ee323 to disappear
Oct  3 23:52:17.818: INFO: Pod downward-api-3fdf4705-daa0-40df-9718-3686f31ee323 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:17.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5765" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":13,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:18.616: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Oct  3 23:52:01.326: INFO: PersistentVolumeClaim pvc-x6q8k found but phase is Pending instead of Bound.
Oct  3 23:52:03.351: INFO: PersistentVolumeClaim pvc-x6q8k found and phase=Bound (8.125853924s)
Oct  3 23:52:03.351: INFO: Waiting up to 3m0s for PersistentVolume local-68k4s to have phase Bound
Oct  3 23:52:03.377: INFO: PersistentVolume local-68k4s found and phase=Bound (25.86523ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4hb9
STEP: Creating a pod to test subpath
Oct  3 23:52:03.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4hb9" in namespace "provisioning-3605" to be "Succeeded or Failed"
Oct  3 23:52:03.493: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.019801ms
Oct  3 23:52:05.518: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051961394s
Oct  3 23:52:07.545: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078973223s
Oct  3 23:52:09.585: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118533109s
Oct  3 23:52:11.623: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157024636s
STEP: Saw pod success
Oct  3 23:52:11.623: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9" satisfied condition "Succeeded or Failed"
Oct  3 23:52:11.705: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-preprovisionedpv-4hb9 container test-container-subpath-preprovisionedpv-4hb9: <nil>
STEP: delete the pod
Oct  3 23:52:11.870: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4hb9 to disappear
Oct  3 23:52:11.921: INFO: Pod pod-subpath-test-preprovisionedpv-4hb9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4hb9
Oct  3 23:52:11.921: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4hb9" in namespace "provisioning-3605"
STEP: Creating pod pod-subpath-test-preprovisionedpv-4hb9
STEP: Creating a pod to test subpath
Oct  3 23:52:12.027: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4hb9" in namespace "provisioning-3605" to be "Succeeded or Failed"
Oct  3 23:52:12.063: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.429804ms
Oct  3 23:52:14.093: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066093234s
Oct  3 23:52:16.164: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137028833s
Oct  3 23:52:18.209: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181318099s
STEP: Saw pod success
Oct  3 23:52:18.209: INFO: Pod "pod-subpath-test-preprovisionedpv-4hb9" satisfied condition "Succeeded or Failed"
Oct  3 23:52:18.235: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-preprovisionedpv-4hb9 container test-container-subpath-preprovisionedpv-4hb9: <nil>
STEP: delete the pod
Oct  3 23:52:18.323: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4hb9 to disappear
Oct  3 23:52:18.349: INFO: Pod pod-subpath-test-preprovisionedpv-4hb9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4hb9
Oct  3 23:52:18.349: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4hb9" in namespace "provisioning-3605"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:18.844: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
• [SLOW TEST:6.944 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":7,"skipped":42,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:21.625: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:21.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8285" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":11,"skipped":70,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":97,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:59.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:23.324 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":11,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:22.773: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 175 lines ...
Oct  3 23:52:12.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  3 23:52:12.805: INFO: Waiting up to 5m0s for pod "pod-5cb5f029-1f98-496d-921e-310775e7d573" in namespace "emptydir-4587" to be "Succeeded or Failed"
Oct  3 23:52:12.834: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573": Phase="Pending", Reason="", readiness=false. Elapsed: 28.550405ms
Oct  3 23:52:14.874: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068966415s
Oct  3 23:52:16.900: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094122219s
Oct  3 23:52:18.929: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123819838s
Oct  3 23:52:21.005: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199313746s
Oct  3 23:52:23.109: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303303656s
STEP: Saw pod success
Oct  3 23:52:23.109: INFO: Pod "pod-5cb5f029-1f98-496d-921e-310775e7d573" satisfied condition "Succeeded or Failed"
Oct  3 23:52:23.225: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-5cb5f029-1f98-496d-921e-310775e7d573 container test-container: <nil>
STEP: delete the pod
Oct  3 23:52:23.528: INFO: Waiting for pod pod-5cb5f029-1f98-496d-921e-310775e7d573 to disappear
Oct  3 23:52:23.599: INFO: Pod pod-5cb5f029-1f98-496d-921e-310775e7d573 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.077 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":96,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:23.749: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 72 lines ...
• [SLOW TEST:14.766 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":11,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:25.043: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
Oct  3 23:52:16.163: INFO: PersistentVolumeClaim pvc-kzm7c found but phase is Pending instead of Bound.
Oct  3 23:52:18.201: INFO: PersistentVolumeClaim pvc-kzm7c found and phase=Bound (10.220399347s)
Oct  3 23:52:18.201: INFO: Waiting up to 3m0s for PersistentVolume local-cb9sw to have phase Bound
Oct  3 23:52:18.233: INFO: PersistentVolume local-cb9sw found and phase=Bound (31.840654ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xj67
STEP: Creating a pod to test subpath
Oct  3 23:52:18.316: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xj67" in namespace "provisioning-5998" to be "Succeeded or Failed"
Oct  3 23:52:18.339: INFO: Pod "pod-subpath-test-preprovisionedpv-xj67": Phase="Pending", Reason="", readiness=false. Elapsed: 22.883305ms
Oct  3 23:52:20.373: INFO: Pod "pod-subpath-test-preprovisionedpv-xj67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056370156s
Oct  3 23:52:22.437: INFO: Pod "pod-subpath-test-preprovisionedpv-xj67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120033826s
Oct  3 23:52:24.478: INFO: Pod "pod-subpath-test-preprovisionedpv-xj67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161751811s
STEP: Saw pod success
Oct  3 23:52:24.478: INFO: Pod "pod-subpath-test-preprovisionedpv-xj67" satisfied condition "Succeeded or Failed"
Oct  3 23:52:24.505: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-preprovisionedpv-xj67 container test-container-subpath-preprovisionedpv-xj67: <nil>
STEP: delete the pod
Oct  3 23:52:24.647: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xj67 to disappear
Oct  3 23:52:24.691: INFO: Pod pod-subpath-test-preprovisionedpv-xj67 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xj67
Oct  3 23:52:24.691: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xj67" in namespace "provisioning-5998"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:25.321: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:25.470: INFO: Only supported for providers [azure] (not gce)
... skipping 40 lines ...
Oct  3 23:52:16.224: INFO: PersistentVolumeClaim pvc-g8lh2 found but phase is Pending instead of Bound.
Oct  3 23:52:18.247: INFO: PersistentVolumeClaim pvc-g8lh2 found and phase=Bound (10.206545098s)
Oct  3 23:52:18.247: INFO: Waiting up to 3m0s for PersistentVolume local-vds69 to have phase Bound
Oct  3 23:52:18.276: INFO: PersistentVolume local-vds69 found and phase=Bound (29.41387ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mhvc
STEP: Creating a pod to test subpath
Oct  3 23:52:18.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mhvc" in namespace "provisioning-5516" to be "Succeeded or Failed"
Oct  3 23:52:18.398: INFO: Pod "pod-subpath-test-preprovisionedpv-mhvc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.58849ms
Oct  3 23:52:20.433: INFO: Pod "pod-subpath-test-preprovisionedpv-mhvc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071429052s
Oct  3 23:52:22.509: INFO: Pod "pod-subpath-test-preprovisionedpv-mhvc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14694394s
Oct  3 23:52:24.553: INFO: Pod "pod-subpath-test-preprovisionedpv-mhvc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190863152s
STEP: Saw pod success
Oct  3 23:52:24.553: INFO: Pod "pod-subpath-test-preprovisionedpv-mhvc" satisfied condition "Succeeded or Failed"
Oct  3 23:52:24.593: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-preprovisionedpv-mhvc container test-container-subpath-preprovisionedpv-mhvc: <nil>
STEP: delete the pod
Oct  3 23:52:24.740: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mhvc to disappear
Oct  3 23:52:24.779: INFO: Pod pod-subpath-test-preprovisionedpv-mhvc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mhvc
Oct  3 23:52:24.779: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mhvc" in namespace "provisioning-5516"
... skipping 21 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:25.486: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:26.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-458" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":12,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:26.148: INFO: Only supported for providers [openstack] (not gce)
... skipping 353 lines ...
• [SLOW TEST:9.067 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":14,"skipped":93,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:27.738: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 137 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-e666fc02-f465-49bd-8443-e64991a53a7c
STEP: Creating a pod to test consume configMaps
Oct  3 23:52:25.626: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33" in namespace "configmap-4724" to be "Succeeded or Failed"
Oct  3 23:52:25.650: INFO: Pod "pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33": Phase="Pending", Reason="", readiness=false. Elapsed: 24.219581ms
Oct  3 23:52:27.688: INFO: Pod "pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062095967s
Oct  3 23:52:29.748: INFO: Pod "pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122421036s
STEP: Saw pod success
Oct  3 23:52:29.748: INFO: Pod "pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33" satisfied condition "Succeeded or Failed"
Oct  3 23:52:29.816: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33 container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:52:29.937: INFO: Waiting for pod pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33 to disappear
Oct  3 23:52:30.020: INFO: Pod pod-configmaps-d0925435-2a43-40aa-a3fd-4bd02f657e33 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:30.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4724" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":70,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:30.310: INFO: Only supported for providers [aws] (not gce)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 5 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:52:25.680: INFO: Waiting up to 5m0s for pod "metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550" in namespace "projected-6051" to be "Succeeded or Failed"
Oct  3 23:52:25.710: INFO: Pod "metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550": Phase="Pending", Reason="", readiness=false. Elapsed: 30.063113ms
Oct  3 23:52:27.757: INFO: Pod "metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076893882s
Oct  3 23:52:29.815: INFO: Pod "metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134744484s
STEP: Saw pod success
Oct  3 23:52:29.815: INFO: Pod "metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550" satisfied condition "Succeeded or Failed"
Oct  3 23:52:29.848: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550 container client-container: <nil>
STEP: delete the pod
Oct  3 23:52:30.118: INFO: Waiting for pod metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550 to disappear
Oct  3 23:52:30.193: INFO: Pod metadata-volume-2c5902ce-7bdf-409f-8801-70e72335c550 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:30.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6051" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":45,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:30.399: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 41 lines ...
Oct  3 23:52:02.869: INFO: PersistentVolumeClaim pvc-zhzmn found but phase is Pending instead of Bound.
Oct  3 23:52:04.893: INFO: PersistentVolumeClaim pvc-zhzmn found and phase=Bound (6.0967964s)
Oct  3 23:52:04.893: INFO: Waiting up to 3m0s for PersistentVolume gcepd-nk88r to have phase Bound
Oct  3 23:52:04.916: INFO: PersistentVolume gcepd-nk88r found and phase=Bound (23.144313ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-mqsp
STEP: Creating a pod to test exec-volume-test
Oct  3 23:52:04.986: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-mqsp" in namespace "volume-740" to be "Succeeded or Failed"
Oct  3 23:52:05.008: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.671253ms
Oct  3 23:52:07.032: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046099501s
Oct  3 23:52:09.056: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070081146s
Oct  3 23:52:11.086: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09995231s
Oct  3 23:52:13.113: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126827785s
Oct  3 23:52:15.137: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150945114s
Oct  3 23:52:17.161: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.175106307s
STEP: Saw pod success
Oct  3 23:52:17.161: INFO: Pod "exec-volume-test-preprovisionedpv-mqsp" satisfied condition "Succeeded or Failed"
Oct  3 23:52:17.184: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod exec-volume-test-preprovisionedpv-mqsp container exec-container-preprovisionedpv-mqsp: <nil>
STEP: delete the pod
Oct  3 23:52:17.243: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mqsp to disappear
Oct  3 23:52:17.267: INFO: Pod exec-volume-test-preprovisionedpv-mqsp no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-mqsp
Oct  3 23:52:17.267: INFO: Deleting pod "exec-volume-test-preprovisionedpv-mqsp" in namespace "volume-740"
STEP: Deleting pv and pvc
Oct  3 23:52:17.290: INFO: Deleting PersistentVolumeClaim "pvc-zhzmn"
Oct  3 23:52:17.316: INFO: Deleting PersistentVolume "gcepd-nk88r"
Oct  3 23:52:18.045: INFO: error deleting PD "e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:52:18.045: INFO: Couldn't delete PD "e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:52:23.702: INFO: error deleting PD "e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:52:23.703: INFO: Couldn't delete PD "e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:52:30.819: INFO: Successfully deleted PD "e2e-7791a18b-a4ec-4efa-95c6-3fb75dba72dd".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:30.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-740" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:30.917: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 72 lines ...
Oct  3 23:52:01.216: INFO: Creating resource for dynamic PV
Oct  3 23:52:01.216: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(gcepd) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-8058dssxm
STEP: creating a claim
STEP: Expanding non-expandable pvc
Oct  3 23:52:01.296: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  3 23:52:01.349: INFO: Error updating pvc gcepdlx4pj: PersistentVolumeClaim "gcepdlx4pj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8058dssxm",
  	... // 3 identical fields
  }

... skipping 285 lines ...
Oct  3 23:52:31.512: INFO: Error updating pvc gcepdlx4pj: PersistentVolumeClaim "gcepdlx4pj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":11,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:31.729: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:34.150: INFO: Only supported for providers [aws] (not gce)
... skipping 159 lines ...
Oct  3 23:51:46.331: INFO: PersistentVolumeClaim pvc-bdmwv found but phase is Pending instead of Bound.
Oct  3 23:51:48.355: INFO: PersistentVolumeClaim pvc-bdmwv found and phase=Bound (2.054727781s)
STEP: Deleting the previously created pod
Oct  3 23:52:08.492: INFO: Deleting pod "pvc-volume-tester-tvsb4" in namespace "csi-mock-volumes-2197"
Oct  3 23:52:08.524: INFO: Wait up to 5m0s for pod "pvc-volume-tester-tvsb4" to be fully deleted
STEP: Checking CSI driver logs
Oct  3 23:52:12.626: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ae709be8-356d-47a0-a373-bd58d3228b14/volumes/kubernetes.io~csi/pvc-a57e236e-8b5a-4f40-9849-a53a90e78a79/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-tvsb4
Oct  3 23:52:12.626: INFO: Deleting pod "pvc-volume-tester-tvsb4" in namespace "csi-mock-volumes-2197"
STEP: Deleting claim pvc-bdmwv
Oct  3 23:52:12.712: INFO: Waiting up to 2m0s for PersistentVolume pvc-a57e236e-8b5a-4f40-9849-a53a90e78a79 to get deleted
Oct  3 23:52:12.745: INFO: PersistentVolume pvc-a57e236e-8b5a-4f40-9849-a53a90e78a79 found and phase=Bound (33.296777ms)
Oct  3 23:52:14.779: INFO: PersistentVolume pvc-a57e236e-8b5a-4f40-9849-a53a90e78a79 found and phase=Released (2.067140815s)
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Oct  3 23:52:26.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
Oct  3 23:52:26.735: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  3 23:52:26.816: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5176" in namespace "provisioning-5176" to be "Succeeded or Failed"
Oct  3 23:52:26.849: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Pending", Reason="", readiness=false. Elapsed: 32.710122ms
Oct  3 23:52:28.878: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061736411s
Oct  3 23:52:30.917: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100739042s
STEP: Saw pod success
Oct  3 23:52:30.917: INFO: Pod "hostpath-symlink-prep-provisioning-5176" satisfied condition "Succeeded or Failed"
Oct  3 23:52:30.917: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5176" in namespace "provisioning-5176"
Oct  3 23:52:31.024: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5176" to be fully deleted
Oct  3 23:52:31.077: INFO: Creating resource for inline volume
Oct  3 23:52:31.077: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct  3 23:52:31.078: INFO: Deleting pod "pod-subpath-test-inlinevolume-nwrz" in namespace "provisioning-5176"
Oct  3 23:52:31.178: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5176" in namespace "provisioning-5176" to be "Succeeded or Failed"
Oct  3 23:52:31.218: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Pending", Reason="", readiness=false. Elapsed: 39.785426ms
Oct  3 23:52:33.247: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069355047s
Oct  3 23:52:35.274: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096182416s
Oct  3 23:52:37.298: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120140169s
Oct  3 23:52:39.323: INFO: Pod "hostpath-symlink-prep-provisioning-5176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145582905s
STEP: Saw pod success
Oct  3 23:52:39.323: INFO: Pod "hostpath-symlink-prep-provisioning-5176" satisfied condition "Succeeded or Failed"
Oct  3 23:52:39.323: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5176" in namespace "provisioning-5176"
Oct  3 23:52:39.354: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5176" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:39.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5176" for this suite.
... skipping 156 lines ...
• [SLOW TEST:22.448 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:44.460: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 172 lines ...
Oct  3 23:52:31.345: INFO: PersistentVolumeClaim pvc-4m2gz found but phase is Pending instead of Bound.
Oct  3 23:52:33.372: INFO: PersistentVolumeClaim pvc-4m2gz found and phase=Bound (10.27188515s)
Oct  3 23:52:33.372: INFO: Waiting up to 3m0s for PersistentVolume local-5vvsn to have phase Bound
Oct  3 23:52:33.419: INFO: PersistentVolume local-5vvsn found and phase=Bound (46.665363ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zfwr
STEP: Creating a pod to test subpath
Oct  3 23:52:33.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zfwr" in namespace "provisioning-9138" to be "Succeeded or Failed"
Oct  3 23:52:33.534: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06574ms
Oct  3 23:52:35.563: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057185633s
Oct  3 23:52:37.588: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082564453s
Oct  3 23:52:39.616: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110545956s
Oct  3 23:52:41.679: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17358321s
Oct  3 23:52:43.704: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.198047719s
STEP: Saw pod success
Oct  3 23:52:43.704: INFO: Pod "pod-subpath-test-preprovisionedpv-zfwr" satisfied condition "Succeeded or Failed"
Oct  3 23:52:43.729: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-zfwr container test-container-volume-preprovisionedpv-zfwr: <nil>
STEP: delete the pod
Oct  3 23:52:43.792: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zfwr to disappear
Oct  3 23:52:43.816: INFO: Pod pod-subpath-test-preprovisionedpv-zfwr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zfwr
Oct  3 23:52:43.816: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zfwr" in namespace "provisioning-9138"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:45.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3129" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":12,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:45.135: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 149 lines ...
      Driver "csi-hostpath" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":6,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:53.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:53.092 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":5,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":15,"skipped":121,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:46.584: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 243 lines ...
• [SLOW TEST:30.951 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":8,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:48.864: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":8,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:49.716: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
      Only supported for providers [azure] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:51:28.079: INFO: >>> kubeConfig: /root/.kube/config
... skipping 52 lines ...
Oct  3 23:52:38.172: INFO: Pod gcepd-client still exists
Oct  3 23:52:40.149: INFO: Waiting for pod gcepd-client to disappear
Oct  3 23:52:40.175: INFO: Pod gcepd-client still exists
Oct  3 23:52:42.149: INFO: Waiting for pod gcepd-client to disappear
Oct  3 23:52:42.176: INFO: Pod gcepd-client no longer exists
STEP: cleaning the environment after gcepd
Oct  3 23:52:42.879: INFO: error deleting PD "e2e-9e3ff02f-5ea1-4745-b6c5-eaf5043d8634": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-9e3ff02f-5ea1-4745-b6c5-eaf5043d8634' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource
Oct  3 23:52:42.879: INFO: Couldn't delete PD "e2e-9e3ff02f-5ea1-4745-b6c5-eaf5043d8634", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-9e3ff02f-5ea1-4745-b6c5-eaf5043d8634' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource
Oct  3 23:52:49.804: INFO: Successfully deleted PD "e2e-9e3ff02f-5ea1-4745-b6c5-eaf5043d8634".
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:49.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8938" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext3)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:49.869: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:50.020: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:50.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-515" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":14,"skipped":97,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:50.498: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Oct  3 23:52:46.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Oct  3 23:52:46.734: INFO: Waiting up to 5m0s for pod "pod-c963f33e-bdd6-4249-9751-df88c7ce4df7" in namespace "emptydir-5175" to be "Succeeded or Failed"
Oct  3 23:52:46.760: INFO: Pod "pod-c963f33e-bdd6-4249-9751-df88c7ce4df7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.880214ms
Oct  3 23:52:48.784: INFO: Pod "pod-c963f33e-bdd6-4249-9751-df88c7ce4df7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05053988s
Oct  3 23:52:50.809: INFO: Pod "pod-c963f33e-bdd6-4249-9751-df88c7ce4df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075574289s
STEP: Saw pod success
Oct  3 23:52:50.809: INFO: Pod "pod-c963f33e-bdd6-4249-9751-df88c7ce4df7" satisfied condition "Succeeded or Failed"
Oct  3 23:52:50.835: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-c963f33e-bdd6-4249-9751-df88c7ce4df7 container test-container: <nil>
STEP: delete the pod
Oct  3 23:52:50.905: INFO: Waiting for pod pod-c963f33e-bdd6-4249-9751-df88c7ce4df7 to disappear
Oct  3 23:52:50.933: INFO: Pod pod-c963f33e-bdd6-4249-9751-df88c7ce4df7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:50.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5175" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 17 lines ...
Oct  3 23:52:46.295: INFO: PersistentVolumeClaim pvc-j8pfx found but phase is Pending instead of Bound.
Oct  3 23:52:48.318: INFO: PersistentVolumeClaim pvc-j8pfx found and phase=Bound (6.093179861s)
Oct  3 23:52:48.318: INFO: Waiting up to 3m0s for PersistentVolume local-sns9l to have phase Bound
Oct  3 23:52:48.343: INFO: PersistentVolume local-sns9l found and phase=Bound (24.365919ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-zngx
STEP: Creating a pod to test exec-volume-test
Oct  3 23:52:48.412: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-zngx" in namespace "volume-3131" to be "Succeeded or Failed"
Oct  3 23:52:48.439: INFO: Pod "exec-volume-test-preprovisionedpv-zngx": Phase="Pending", Reason="", readiness=false. Elapsed: 26.51762ms
Oct  3 23:52:50.463: INFO: Pod "exec-volume-test-preprovisionedpv-zngx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050837783s
Oct  3 23:52:52.489: INFO: Pod "exec-volume-test-preprovisionedpv-zngx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076315039s
STEP: Saw pod success
Oct  3 23:52:52.489: INFO: Pod "exec-volume-test-preprovisionedpv-zngx" satisfied condition "Succeeded or Failed"
Oct  3 23:52:52.512: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod exec-volume-test-preprovisionedpv-zngx container exec-container-preprovisionedpv-zngx: <nil>
STEP: delete the pod
Oct  3 23:52:52.571: INFO: Waiting for pod exec-volume-test-preprovisionedpv-zngx to disappear
Oct  3 23:52:52.595: INFO: Pod exec-volume-test-preprovisionedpv-zngx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-zngx
Oct  3 23:52:52.595: INFO: Deleting pod "exec-volume-test-preprovisionedpv-zngx" in namespace "volume-3131"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":13,"skipped":169,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct  3 23:52:46.491: INFO: PersistentVolumeClaim pvc-xh42s found but phase is Pending instead of Bound.
Oct  3 23:52:48.515: INFO: PersistentVolumeClaim pvc-xh42s found and phase=Bound (8.120247886s)
Oct  3 23:52:48.515: INFO: Waiting up to 3m0s for PersistentVolume local-mfjkn to have phase Bound
Oct  3 23:52:48.538: INFO: PersistentVolume local-mfjkn found and phase=Bound (22.96297ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gzml
STEP: Creating a pod to test subpath
Oct  3 23:52:48.615: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gzml" in namespace "provisioning-5600" to be "Succeeded or Failed"
Oct  3 23:52:48.643: INFO: Pod "pod-subpath-test-preprovisionedpv-gzml": Phase="Pending", Reason="", readiness=false. Elapsed: 27.494854ms
Oct  3 23:52:50.668: INFO: Pod "pod-subpath-test-preprovisionedpv-gzml": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052740188s
Oct  3 23:52:52.692: INFO: Pod "pod-subpath-test-preprovisionedpv-gzml": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076915043s
STEP: Saw pod success
Oct  3 23:52:52.692: INFO: Pod "pod-subpath-test-preprovisionedpv-gzml" satisfied condition "Succeeded or Failed"
Oct  3 23:52:52.719: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-subpath-test-preprovisionedpv-gzml container test-container-volume-preprovisionedpv-gzml: <nil>
STEP: delete the pod
Oct  3 23:52:52.779: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gzml to disappear
Oct  3 23:52:52.803: INFO: Pod pod-subpath-test-preprovisionedpv-gzml no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gzml
Oct  3 23:52:52.803: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gzml" in namespace "provisioning-5600"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":79,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:53.232: INFO: Only supported for providers [vsphere] (not gce)
... skipping 57 lines ...
• [SLOW TEST:6.633 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":16,"skipped":170,"failed":0}
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:52:53.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:53.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-478" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":17,"skipped":170,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:53.859: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-b141c0a8-1b34-4eb3-b90a-b40cb83882ff
STEP: Creating a pod to test consume configMaps
Oct  3 23:52:51.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd" in namespace "projected-3181" to be "Succeeded or Failed"
Oct  3 23:52:51.245: INFO: Pod "pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.045546ms
Oct  3 23:52:53.269: INFO: Pod "pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049259689s
Oct  3 23:52:55.293: INFO: Pod "pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073651138s
STEP: Saw pod success
Oct  3 23:52:55.293: INFO: Pod "pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd" satisfied condition "Succeeded or Failed"
Oct  3 23:52:55.317: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:52:55.378: INFO: Waiting for pod pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd to disappear
Oct  3 23:52:55.401: INFO: Pod pod-projected-configmaps-ae6e4b4d-2223-414c-a814-a91451bbb6dd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:52:55.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3181" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:55.475: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct  3 23:52:49.866: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  3 23:52:49.866: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vtfk
STEP: Creating a pod to test subpath
Oct  3 23:52:49.899: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vtfk" in namespace "provisioning-3673" to be "Succeeded or Failed"
Oct  3 23:52:49.928: INFO: Pod "pod-subpath-test-inlinevolume-vtfk": Phase="Pending", Reason="", readiness=false. Elapsed: 28.869705ms
Oct  3 23:52:51.953: INFO: Pod "pod-subpath-test-inlinevolume-vtfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054400549s
Oct  3 23:52:53.984: INFO: Pod "pod-subpath-test-inlinevolume-vtfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085510532s
Oct  3 23:52:56.011: INFO: Pod "pod-subpath-test-inlinevolume-vtfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111607474s
STEP: Saw pod success
Oct  3 23:52:56.011: INFO: Pod "pod-subpath-test-inlinevolume-vtfk" satisfied condition "Succeeded or Failed"
Oct  3 23:52:56.043: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-inlinevolume-vtfk container test-container-subpath-inlinevolume-vtfk: <nil>
STEP: delete the pod
Oct  3 23:52:56.112: INFO: Waiting for pod pod-subpath-test-inlinevolume-vtfk to disappear
Oct  3 23:52:56.150: INFO: Pod pod-subpath-test-inlinevolume-vtfk no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vtfk
Oct  3 23:52:56.150: INFO: Deleting pod "pod-subpath-test-inlinevolume-vtfk" in namespace "provisioning-3673"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:56.271: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
• [SLOW TEST:9.013 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:52:58.907: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 717 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":5,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:01.224: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 369 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":15,"skipped":107,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:01.456: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 38 lines ...
• [SLOW TEST:86.347 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":8,"skipped":76,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:01.873: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:28.453 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:02.687: INFO: Driver local doesn't support ext4 -- skipping
... skipping 174 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:03.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1626" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":7,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:52:58.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  3 23:52:59.101: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-81eb300b-8e9f-4aad-b27e-a9387a8ed0b6" in namespace "security-context-test-1778" to be "Succeeded or Failed"
Oct  3 23:52:59.140: INFO: Pod "busybox-privileged-false-81eb300b-8e9f-4aad-b27e-a9387a8ed0b6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.713008ms
Oct  3 23:53:01.183: INFO: Pod "busybox-privileged-false-81eb300b-8e9f-4aad-b27e-a9387a8ed0b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081959947s
Oct  3 23:53:03.208: INFO: Pod "busybox-privileged-false-81eb300b-8e9f-4aad-b27e-a9387a8ed0b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106144997s
Oct  3 23:53:03.208: INFO: Pod "busybox-privileged-false-81eb300b-8e9f-4aad-b27e-a9387a8ed0b6" satisfied condition "Succeeded or Failed"
Oct  3 23:53:03.259: INFO: Got logs for pod "busybox-privileged-false-81eb300b-8e9f-4aad-b27e-a9387a8ed0b6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1778" for this suite.

... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:04.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2145" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":8,"skipped":80,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:04.659: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 135 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Oct  3 23:52:53.414: INFO: Waiting up to 5m0s for pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8" in namespace "pods-7534" to be "Succeeded or Failed"
Oct  3 23:52:53.481: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 66.643092ms
Oct  3 23:52:55.505: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091148678s
Oct  3 23:52:57.529: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114849711s
Oct  3 23:52:59.557: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142817118s
Oct  3 23:53:01.583: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169075652s
Oct  3 23:53:03.606: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192048734s
STEP: Saw pod success
Oct  3 23:53:03.606: INFO: Pod "pod-always-succeed0d5f2403-a554-47b6-94bb-0ab31406dcd8" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:05.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:476
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":13,"skipped":83,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 10 lines ...
Oct  3 23:51:46.448: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1354m46px      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/gce-pd,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1354    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1354m46px,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1354    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1354m46px,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1354    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1354m46px,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-1354m46px    471714c1-2f3e-4540-ac16-00767f1a973b 7985 0 2021-10-03 23:51:46 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-10-03 23:51:46 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/gce-pd,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-nbffj pvc- provisioning-1354  daece670-ae34-43bc-97b9-1970ace90838 7989 0 2021-10-03 23:51:46 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-10-03 23:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1354m46px,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-10de7bf7-e43f-4de3-9190-711b305078ec in namespace provisioning-1354
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Oct  3 23:52:04.738: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-xwz8c" in namespace "provisioning-1354" to be "Succeeded or Failed"
Oct  3 23:52:04.763: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.641997ms
Oct  3 23:52:06.789: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050686684s
Oct  3 23:52:08.824: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086182779s
Oct  3 23:52:10.853: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11540925s
Oct  3 23:52:12.890: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15203492s
Oct  3 23:52:14.926: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187655353s
... skipping 8 lines ...
Oct  3 23:52:33.315: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.577334698s
Oct  3 23:52:35.342: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.604274448s
Oct  3 23:52:37.368: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.630039368s
Oct  3 23:52:39.395: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.656606819s
Oct  3 23:52:41.421: INFO: Pod "pvc-volume-tester-writer-xwz8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.683090528s
STEP: Saw pod success
Oct  3 23:52:41.421: INFO: Pod "pvc-volume-tester-writer-xwz8c" satisfied condition "Succeeded or Failed"
Oct  3 23:52:41.475: INFO: Pod pvc-volume-tester-writer-xwz8c has the following logs: 
Oct  3 23:52:41.475: INFO: Deleting pod "pvc-volume-tester-writer-xwz8c" in namespace "provisioning-1354"
Oct  3 23:52:41.506: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-xwz8c" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "nodes-us-west3-a-gn9g"
Oct  3 23:52:41.611: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-nqktj" in namespace "provisioning-1354" to be "Succeeded or Failed"
Oct  3 23:52:41.638: INFO: Pod "pvc-volume-tester-reader-nqktj": Phase="Pending", Reason="", readiness=false. Elapsed: 27.184377ms
Oct  3 23:52:43.664: INFO: Pod "pvc-volume-tester-reader-nqktj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053245507s
Oct  3 23:52:45.691: INFO: Pod "pvc-volume-tester-reader-nqktj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079428395s
STEP: Saw pod success
Oct  3 23:52:45.691: INFO: Pod "pvc-volume-tester-reader-nqktj" satisfied condition "Succeeded or Failed"
Oct  3 23:52:45.745: INFO: Pod pvc-volume-tester-reader-nqktj has the following logs: hello world

Oct  3 23:52:45.745: INFO: Deleting pod "pvc-volume-tester-reader-nqktj" in namespace "provisioning-1354"
Oct  3 23:52:45.781: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-nqktj" to be fully deleted
Oct  3 23:52:45.806: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nbffj] to have phase Bound
Oct  3 23:52:45.836: INFO: PersistentVolumeClaim pvc-nbffj found and phase=Bound (29.56351ms)
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":9,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:06.172: INFO: Only supported for providers [openstack] (not gce)
... skipping 118 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 30 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  3 23:53:01.442: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac" in namespace "downward-api-9420" to be "Succeeded or Failed"
Oct  3 23:53:01.474: INFO: Pod "downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac": Phase="Pending", Reason="", readiness=false. Elapsed: 32.146591ms
Oct  3 23:53:03.499: INFO: Pod "downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057362017s
Oct  3 23:53:05.524: INFO: Pod "downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082321646s
Oct  3 23:53:07.550: INFO: Pod "downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107847081s
STEP: Saw pod success
Oct  3 23:53:07.550: INFO: Pod "downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac" satisfied condition "Succeeded or Failed"
Oct  3 23:53:07.574: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac container client-container: <nil>
STEP: delete the pod
Oct  3 23:53:07.673: INFO: Waiting for pod downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac to disappear
Oct  3 23:53:07.703: INFO: Pod downwardapi-volume-6ae303b0-35d4-4892-b720-b937551047ac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:07.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6994" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":10,"skipped":82,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:07.786: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:08.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8844" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":11,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-8823" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":12,"skipped":88,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:08.854: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 97 lines ...
Oct  3 23:52:55.567: INFO: Waiting for pod gcepd-client to disappear
Oct  3 23:52:55.596: INFO: Pod gcepd-client no longer exists
STEP: cleaning the environment after gcepd
STEP: Deleting pv and pvc
Oct  3 23:52:55.596: INFO: Deleting PersistentVolumeClaim "pvc-ncc7k"
Oct  3 23:52:55.631: INFO: Deleting PersistentVolume "gcepd-8fhsw"
Oct  3 23:52:56.282: INFO: error deleting PD "e2e-778a043e-25f5-4663-820c-42ef35b71be1": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-778a043e-25f5-4663-820c-42ef35b71be1' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:52:56.282: INFO: Couldn't delete PD "e2e-778a043e-25f5-4663-820c-42ef35b71be1", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-778a043e-25f5-4663-820c-42ef35b71be1' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:53:01.968: INFO: error deleting PD "e2e-778a043e-25f5-4663-820c-42ef35b71be1": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-778a043e-25f5-4663-820c-42ef35b71be1' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:53:01.968: INFO: Couldn't delete PD "e2e-778a043e-25f5-4663-820c-42ef35b71be1", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-778a043e-25f5-4663-820c-42ef35b71be1' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
Oct  3 23:53:08.901: INFO: Successfully deleted PD "e2e-778a043e-25f5-4663-820c-42ef35b71be1".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:08.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7515" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":7,"skipped":82,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:09.434: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
• [SLOW TEST:13.221 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":10,"skipped":71,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":46,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:03.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:10.982: INFO: Driver windows-gcepd doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Oct  3 23:53:09.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Oct  3 23:53:09.754: INFO: Waiting up to 5m0s for pod "var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d" in namespace "var-expansion-7570" to be "Succeeded or Failed"
Oct  3 23:53:09.778: INFO: Pod "var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.319995ms
Oct  3 23:53:11.805: INFO: Pod "var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051039734s
Oct  3 23:53:13.829: INFO: Pod "var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074716803s
STEP: Saw pod success
Oct  3 23:53:13.829: INFO: Pod "var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d" satisfied condition "Succeeded or Failed"
Oct  3 23:53:13.852: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d container dapi-container: <nil>
STEP: delete the pod
Oct  3 23:53:13.934: INFO: Waiting for pod var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d to disappear
Oct  3 23:53:13.958: INFO: Pod var-expansion-db272989-0ad6-4ce6-96c3-c8756391ce4d no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:13.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7570" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":74,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:14.059: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:14.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8872" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":12,"skipped":88,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:14.361: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
• [SLOW TEST:6.998 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":106,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct  3 23:53:01.923: INFO: PersistentVolumeClaim pvc-2cgpk found but phase is Pending instead of Bound.
Oct  3 23:53:03.950: INFO: PersistentVolumeClaim pvc-2cgpk found and phase=Bound (12.179721482s)
Oct  3 23:53:03.950: INFO: Waiting up to 3m0s for PersistentVolume local-fbvcp to have phase Bound
Oct  3 23:53:03.995: INFO: PersistentVolume local-fbvcp found and phase=Bound (45.082063ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4gx9
STEP: Creating a pod to test subpath
Oct  3 23:53:04.102: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4gx9" in namespace "provisioning-9054" to be "Succeeded or Failed"
Oct  3 23:53:04.133: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.265587ms
Oct  3 23:53:06.158: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0554894s
Oct  3 23:53:08.195: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092427047s
Oct  3 23:53:10.221: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118298953s
Oct  3 23:53:12.249: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146261903s
Oct  3 23:53:14.274: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.171466076s
Oct  3 23:53:16.300: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.197904707s
Oct  3 23:53:18.325: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.222790249s
STEP: Saw pod success
Oct  3 23:53:18.325: INFO: Pod "pod-subpath-test-preprovisionedpv-4gx9" satisfied condition "Succeeded or Failed"
Oct  3 23:53:18.349: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-preprovisionedpv-4gx9 container test-container-subpath-preprovisionedpv-4gx9: <nil>
STEP: delete the pod
Oct  3 23:53:18.418: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4gx9 to disappear
Oct  3 23:53:18.443: INFO: Pod pod-subpath-test-preprovisionedpv-4gx9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4gx9
Oct  3 23:53:18.443: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4gx9" in namespace "provisioning-9054"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":13,"skipped":114,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:19.559: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":8,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2994" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":108,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:21.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1032" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":15,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:21.417: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Oct  3 23:53:09.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Oct  3 23:53:09.625: INFO: Waiting up to 5m0s for pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df" in namespace "containers-3309" to be "Succeeded or Failed"
Oct  3 23:53:09.651: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Pending", Reason="", readiness=false. Elapsed: 26.035701ms
Oct  3 23:53:11.674: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049651854s
Oct  3 23:53:13.698: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073616434s
Oct  3 23:53:15.727: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102377803s
Oct  3 23:53:17.751: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125837839s
Oct  3 23:53:19.776: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.151335038s
Oct  3 23:53:21.833: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.207812394s
STEP: Saw pod success
Oct  3 23:53:21.833: INFO: Pod "client-containers-e83042c2-6377-40ae-952f-c78a372d14df" satisfied condition "Succeeded or Failed"
Oct  3 23:53:21.883: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod client-containers-e83042c2-6377-40ae-952f-c78a372d14df container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:53:21.957: INFO: Waiting for pod client-containers-e83042c2-6377-40ae-952f-c78a372d14df to disappear
Oct  3 23:53:21.980: INFO: Pod client-containers-e83042c2-6377-40ae-952f-c78a372d14df no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.562 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:22.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1123" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":16,"skipped":114,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:22.093: INFO: Only supported for providers [aws] (not gce)
... skipping 105 lines ...
• [SLOW TEST:19.096 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":9,"skipped":100,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:23.878: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 28 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Oct  3 23:53:11.149: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  3 23:53:11.149: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-8b2w
STEP: Creating a pod to test exec-volume-test
Oct  3 23:53:11.178: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-8b2w" in namespace "volume-9102" to be "Succeeded or Failed"
Oct  3 23:53:11.230: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 51.304144ms
Oct  3 23:53:13.258: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079113722s
Oct  3 23:53:15.282: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103899032s
Oct  3 23:53:17.310: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131285524s
Oct  3 23:53:19.357: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178681883s
Oct  3 23:53:21.386: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207584228s
Oct  3 23:53:23.413: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.234205839s
Oct  3 23:53:25.444: INFO: Pod "exec-volume-test-inlinevolume-8b2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.265857998s
STEP: Saw pod success
Oct  3 23:53:25.444: INFO: Pod "exec-volume-test-inlinevolume-8b2w" satisfied condition "Succeeded or Failed"
Oct  3 23:53:25.478: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod exec-volume-test-inlinevolume-8b2w container exec-container-inlinevolume-8b2w: <nil>
STEP: delete the pod
Oct  3 23:53:25.584: INFO: Waiting for pod exec-volume-test-inlinevolume-8b2w to disappear
Oct  3 23:53:25.613: INFO: Pod exec-volume-test-inlinevolume-8b2w no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-8b2w
Oct  3 23:53:25.613: INFO: Deleting pod "exec-volume-test-inlinevolume-8b2w" in namespace "volume-9102"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":50,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":63,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:07.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:26.763: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
      Only supported for providers [openstack] (not gce)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":16,"skipped":117,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:03.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":17,"skipped":117,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Oct  3 23:53:02.596: INFO: PersistentVolumeClaim pvc-qrnms found but phase is Pending instead of Bound.
Oct  3 23:53:04.621: INFO: PersistentVolumeClaim pvc-qrnms found and phase=Bound (6.107664096s)
Oct  3 23:53:04.621: INFO: Waiting up to 3m0s for PersistentVolume local-6wvjc to have phase Bound
Oct  3 23:53:04.646: INFO: PersistentVolume local-6wvjc found and phase=Bound (24.548709ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vbxs
STEP: Creating a pod to test subpath
Oct  3 23:53:04.725: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vbxs" in namespace "provisioning-7187" to be "Succeeded or Failed"
Oct  3 23:53:04.750: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 25.235437ms
Oct  3 23:53:06.778: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052756404s
Oct  3 23:53:08.846: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120452172s
Oct  3 23:53:10.874: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148949659s
Oct  3 23:53:12.904: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179290553s
Oct  3 23:53:14.939: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213736996s
Oct  3 23:53:16.964: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.238644804s
Oct  3 23:53:19.034: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.30903662s
Oct  3 23:53:21.072: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.347127153s
Oct  3 23:53:23.097: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.372281361s
Oct  3 23:53:25.131: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 20.406302579s
Oct  3 23:53:27.169: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.444003183s
STEP: Saw pod success
Oct  3 23:53:27.169: INFO: Pod "pod-subpath-test-preprovisionedpv-vbxs" satisfied condition "Succeeded or Failed"
Oct  3 23:53:27.203: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-subpath-test-preprovisionedpv-vbxs container test-container-subpath-preprovisionedpv-vbxs: <nil>
STEP: delete the pod
Oct  3 23:53:27.323: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vbxs to disappear
Oct  3 23:53:27.354: INFO: Pod pod-subpath-test-preprovisionedpv-vbxs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vbxs
Oct  3 23:53:27.354: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vbxs" in namespace "provisioning-7187"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":18,"skipped":182,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:23.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  3 23:53:24.042: INFO: Waiting up to 5m0s for pod "downward-api-428e9049-b104-461b-8014-f4b1e4be8832" in namespace "downward-api-4849" to be "Succeeded or Failed"
Oct  3 23:53:24.069: INFO: Pod "downward-api-428e9049-b104-461b-8014-f4b1e4be8832": Phase="Pending", Reason="", readiness=false. Elapsed: 27.13353ms
Oct  3 23:53:26.094: INFO: Pod "downward-api-428e9049-b104-461b-8014-f4b1e4be8832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05168784s
Oct  3 23:53:28.119: INFO: Pod "downward-api-428e9049-b104-461b-8014-f4b1e4be8832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076877143s
STEP: Saw pod success
Oct  3 23:53:28.119: INFO: Pod "downward-api-428e9049-b104-461b-8014-f4b1e4be8832" satisfied condition "Succeeded or Failed"
Oct  3 23:53:28.143: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod downward-api-428e9049-b104-461b-8014-f4b1e4be8832 container dapi-container: <nil>
STEP: delete the pod
Oct  3 23:53:28.223: INFO: Waiting for pod downward-api-428e9049-b104-461b-8014-f4b1e4be8832 to disappear
Oct  3 23:53:28.247: INFO: Pod downward-api-428e9049-b104-461b-8014-f4b1e4be8832 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:28.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4849" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":105,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:19.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-5662/configmap-test-7ffcd757-99a6-465b-a1b1-aa7779292c98
STEP: Creating a pod to test consume configMaps
Oct  3 23:53:20.061: INFO: Waiting up to 5m0s for pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56" in namespace "configmap-5662" to be "Succeeded or Failed"
Oct  3 23:53:20.090: INFO: Pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56": Phase="Pending", Reason="", readiness=false. Elapsed: 28.202172ms
Oct  3 23:53:22.119: INFO: Pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057823632s
Oct  3 23:53:24.143: INFO: Pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081138206s
Oct  3 23:53:26.167: INFO: Pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105661273s
Oct  3 23:53:28.191: INFO: Pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129614979s
STEP: Saw pod success
Oct  3 23:53:28.191: INFO: Pod "pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56" satisfied condition "Succeeded or Failed"
Oct  3 23:53:28.215: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56 container env-test: <nil>
STEP: delete the pod
Oct  3 23:53:28.285: INFO: Waiting for pod pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56 to disappear
Oct  3 23:53:28.310: INFO: Pod pod-configmaps-f971fc56-5dd2-4fc0-9017-bf89b63a5e56 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.485 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:28.385: INFO: Only supported for providers [vsphere] (not gce)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-620b4eb8-5de8-42cc-9429-772923aaa751
STEP: Creating a pod to test consume configMaps
Oct  3 23:53:22.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690" in namespace "configmap-6909" to be "Succeeded or Failed"
Oct  3 23:53:22.277: INFO: Pod "pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690": Phase="Pending", Reason="", readiness=false. Elapsed: 22.415399ms
Oct  3 23:53:24.302: INFO: Pod "pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046703276s
Oct  3 23:53:26.330: INFO: Pod "pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074578543s
Oct  3 23:53:28.356: INFO: Pod "pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101394385s
STEP: Saw pod success
Oct  3 23:53:28.356: INFO: Pod "pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690" satisfied condition "Succeeded or Failed"
Oct  3 23:53:28.400: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690 container agnhost-container: <nil>
STEP: delete the pod
Oct  3 23:53:28.501: INFO: Waiting for pod pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690 to disappear
Oct  3 23:53:28.531: INFO: Pod pod-configmaps-959309e0-5247-40e5-9510-89d6793f8690 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.545 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":65,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:28.626: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 159 lines ...
Oct  3 23:53:20.975: INFO: Waiting for pod gcepd-client to disappear
Oct  3 23:53:20.998: INFO: Pod gcepd-client no longer exists
STEP: cleaning the environment after gcepd
STEP: Deleting pv and pvc
Oct  3 23:53:20.998: INFO: Deleting PersistentVolumeClaim "pvc-t6gcf"
Oct  3 23:53:21.027: INFO: Deleting PersistentVolume "gcepd-vgm58"
Oct  3 23:53:21.696: INFO: error deleting PD "e2e-c90f3cc2-97e2-4c40-88bd-2397341ef052": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-c90f3cc2-97e2-4c40-88bd-2397341ef052' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:53:21.696: INFO: Couldn't delete PD "e2e-c90f3cc2-97e2-4c40-88bd-2397341ef052", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-c90f3cc2-97e2-4c40-88bd-2397341ef052' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
Oct  3 23:53:28.766: INFO: Successfully deleted PD "e2e-c90f3cc2-97e2-4c40-88bd-2397341ef052".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:28.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6605" for this suite.

... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:28.890: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:29.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9947" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":11,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:29.305: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:29.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5613" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":12,"skipped":121,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:29.824: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 82 lines ...
• [SLOW TEST:28.484 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:924
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":9,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:30.383: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Oct  3 23:53:19.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct  3 23:53:20.085: INFO: Waiting up to 5m0s for pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481" in namespace "security-context-7964" to be "Succeeded or Failed"
Oct  3 23:53:20.109: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481": Phase="Pending", Reason="", readiness=false. Elapsed: 24.263889ms
Oct  3 23:53:22.138: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052622163s
Oct  3 23:53:24.163: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077369392s
Oct  3 23:53:26.187: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101434893s
Oct  3 23:53:28.212: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126701992s
Oct  3 23:53:30.267: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181525407s
STEP: Saw pod success
Oct  3 23:53:30.267: INFO: Pod "security-context-850acb37-eec8-41d5-bcb3-be89391b2481" satisfied condition "Succeeded or Failed"
Oct  3 23:53:30.301: INFO: Trying to get logs from node nodes-us-west3-a-chbv pod security-context-850acb37-eec8-41d5-bcb3-be89391b2481 container test-container: <nil>
STEP: delete the pod
Oct  3 23:53:30.386: INFO: Waiting for pod security-context-850acb37-eec8-41d5-bcb3-be89391b2481 to disappear
Oct  3 23:53:30.424: INFO: Pod security-context-850acb37-eec8-41d5-bcb3-be89391b2481 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:30.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-9048" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":13,"skipped":126,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:30.537: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":13,"skipped":94,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:30.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:30.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3580" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":10,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":14,"skipped":121,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:32.199: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 138 lines ...
Oct  3 23:53:30.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  3 23:53:30.638: INFO: Waiting up to 5m0s for pod "downward-api-56272365-0e97-4529-b679-df5299f2f847" in namespace "downward-api-5781" to be "Succeeded or Failed"
Oct  3 23:53:30.689: INFO: Pod "downward-api-56272365-0e97-4529-b679-df5299f2f847": Phase="Pending", Reason="", readiness=false. Elapsed: 50.668907ms
Oct  3 23:53:32.721: INFO: Pod "downward-api-56272365-0e97-4529-b679-df5299f2f847": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082832211s
Oct  3 23:53:34.755: INFO: Pod "downward-api-56272365-0e97-4529-b679-df5299f2f847": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116799111s
STEP: Saw pod success
Oct  3 23:53:34.755: INFO: Pod "downward-api-56272365-0e97-4529-b679-df5299f2f847" satisfied condition "Succeeded or Failed"
Oct  3 23:53:34.805: INFO: Trying to get logs from node nodes-us-west3-a-h26h pod downward-api-56272365-0e97-4529-b679-df5299f2f847 container dapi-container: <nil>
STEP: delete the pod
Oct  3 23:53:34.895: INFO: Waiting for pod downward-api-56272365-0e97-4529-b679-df5299f2f847 to disappear
Oct  3 23:53:34.940: INFO: Pod downward-api-56272365-0e97-4529-b679-df5299f2f847 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:34.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5781" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":88,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:35.069: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.619 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]","total":-1,"completed":8,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:37.431: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
• [SLOW TEST:12.807 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":19,"skipped":183,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:40.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4350" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":20,"skipped":183,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Oct  3 23:53:28.933: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6477" to be "Succeeded or Failed"
Oct  3 23:53:28.971: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 37.994689ms
Oct  3 23:53:31.036: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103066722s
Oct  3 23:53:33.094: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161122669s
Oct  3 23:53:35.134: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201154938s
Oct  3 23:53:37.175: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241828559s
Oct  3 23:53:39.208: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275136664s
Oct  3 23:53:41.233: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.299489658s
STEP: Saw pod success
Oct  3 23:53:41.233: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  3 23:53:41.259: INFO: Trying to get logs from node nodes-us-west3-a-zfb0 pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Oct  3 23:53:41.348: INFO: Waiting for pod pod-host-path-test to disappear
Oct  3 23:53:41.445: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.855 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":11,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  3 23:53:27.062: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  3 23:53:27.097: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kmnf
STEP: Creating a pod to test subpath
Oct  3 23:53:27.200: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kmnf" in namespace "provisioning-6149" to be "Succeeded or Failed"
Oct  3 23:53:27.241: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 40.568938ms
Oct  3 23:53:29.274: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07410631s
Oct  3 23:53:31.363: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163028278s
Oct  3 23:53:33.406: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206012767s
Oct  3 23:53:35.464: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263630953s
Oct  3 23:53:37.520: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.319707106s
Oct  3 23:53:39.550: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.349218895s
Oct  3 23:53:41.585: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.384938756s
Oct  3 23:53:43.613: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.41262216s
Oct  3 23:53:45.641: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.440477872s
Oct  3 23:53:47.671: INFO: Pod "pod-subpath-test-inlinevolume-kmnf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.470213771s
STEP: Saw pod success
Oct  3 23:53:47.671: INFO: Pod "pod-subpath-test-inlinevolume-kmnf" satisfied condition "Succeeded or Failed"
Oct  3 23:53:47.696: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-inlinevolume-kmnf container test-container-volume-inlinevolume-kmnf: <nil>
STEP: delete the pod
Oct  3 23:53:47.779: INFO: Waiting for pod pod-subpath-test-inlinevolume-kmnf to disappear
Oct  3 23:53:47.822: INFO: Pod pod-subpath-test-inlinevolume-kmnf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kmnf
Oct  3 23:53:47.823: INFO: Deleting pod "pod-subpath-test-inlinevolume-kmnf" in namespace "provisioning-6149"
... skipping 26 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct  3 23:53:35.513: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  3 23:53:35.617: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n8nv
STEP: Creating a pod to test subpath
Oct  3 23:53:35.689: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n8nv" in namespace "provisioning-8305" to be "Succeeded or Failed"
Oct  3 23:53:35.825: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 135.704162ms
Oct  3 23:53:37.886: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19661638s
Oct  3 23:53:39.917: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227631175s
Oct  3 23:53:41.942: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25330423s
Oct  3 23:53:43.968: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.279022888s
Oct  3 23:53:46.090: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.400700875s
Oct  3 23:53:48.194: INFO: Pod "pod-subpath-test-inlinevolume-n8nv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.50499086s
STEP: Saw pod success
Oct  3 23:53:48.194: INFO: Pod "pod-subpath-test-inlinevolume-n8nv" satisfied condition "Succeeded or Failed"
Oct  3 23:53:48.226: INFO: Trying to get logs from node nodes-us-west3-a-gn9g pod pod-subpath-test-inlinevolume-n8nv container test-container-volume-inlinevolume-n8nv: <nil>
STEP: delete the pod
Oct  3 23:53:48.363: INFO: Waiting for pod pod-subpath-test-inlinevolume-n8nv to disappear
Oct  3 23:53:48.388: INFO: Pod pod-subpath-test-inlinevolume-n8nv no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-n8nv
Oct  3 23:53:48.388: INFO: Deleting pod "pod-subpath-test-inlinevolume-n8nv" in namespace "provisioning-8305"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:48.638: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 201 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":14,"skipped":136,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:49.378: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
STEP: Deleting pod hostexec-nodes-us-west3-a-chbv-xnzf6 in namespace volumemode-869
Oct  3 23:53:35.375: INFO: Deleting pod "pod-d5d4df84-a9df-4c28-b4e5-f533a86851ef" in namespace "volumemode-869"
Oct  3 23:53:35.464: INFO: Wait up to 5m0s for pod "pod-d5d4df84-a9df-4c28-b4e5-f533a86851ef" to be fully deleted
STEP: Deleting pv and pvc
Oct  3 23:53:41.587: INFO: Deleting PersistentVolumeClaim "pvc-6b4qm"
Oct  3 23:53:41.619: INFO: Deleting PersistentVolume "gcepd-bmg4l"
Oct  3 23:53:42.347: INFO: error deleting PD "e2e-730c779a-7dba-4338-811a-e8b4d685aa2a": googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-730c779a-7dba-4338-811a-e8b4d685aa2a' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource
Oct  3 23:53:42.347: INFO: Couldn't delete PD "e2e-730c779a-7dba-4338-811a-e8b4d685aa2a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-730c779a-7dba-4338-811a-e8b4d685aa2a' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource
Oct  3 23:53:49.335: INFO: Successfully deleted PD "e2e-730c779a-7dba-4338-811a-e8b4d685aa2a".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:49.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-869" for this suite.

... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":86,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:49.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":15,"skipped":145,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:49.518: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 182 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":12,"skipped":132,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  3 23:53:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6555" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":13,"skipped":133,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:55.024: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 300 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":8,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  3 23:53:58.766: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  3 23:53:58.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct  3 23:53:58.959: INFO: found topology map[topology.kubernetes.io/zone:us-west3-a]
Oct  3 23:53:58.959: INFO: In-tree plugin kuberne
... skipping 41937 lines ...

er.go:162] deletion of namespace kubectl-6798 failed: unexpected items still remain in namespace: kubectl-6798 for gvr: /v1, Resource=pods\nI1004 00:00:20.001285       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-1679/sample-webhook-deployment\"\nI1004 00:00:20.191646       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-d3d9245c-be65-4230-9ffc-e02a1c5c1837\nI1004 00:00:20.191690       1 pv_controller.go:1435] volume \"pvc-d3d9245c-be65-4230-9ffc-e02a1c5c1837\" deleted\nI1004 00:00:20.212779       1 pv_controller_base.go:505] deletion of claim \"volume-3522/gcepdsxwbt\" was already processed\nI1004 00:00:20.371758       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-b09738fb-fe11-4a0b-acf5-b1a45bb9c9f4\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b09738fb-fe11-4a0b-acf5-b1a45bb9c9f4\") on node \"nodes-us-west3-a-gn9g\" \nE1004 00:00:20.920844       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8313/default: secrets \"default-token-k664t\" is forbidden: unable to create new content in namespace provisioning-8313 because it is being terminated\nI1004 00:00:21.200252       1 namespace_controller.go:185] Namespace has been deleted provisioning-4461\nI1004 00:00:21.512832       1 pv_controller.go:879] volume \"local-pvpjj78\" entered phase \"Available\"\nI1004 00:00:21.541767       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"pv-5864/pvc-hxhvg\"\nI1004 00:00:21.564028       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3119/pvc-29hbw\" bound to volume \"local-pvpjj78\"\nI1004 00:00:21.606868       1 pv_controller.go:879] volume \"local-pvpjj78\" entered phase \"Bound\"\nI1004 00:00:21.606919       1 pv_controller.go:982] volume \"local-pvpjj78\" bound to claim \"persistent-local-volumes-test-3119/pvc-29hbw\"\nI1004 00:00:21.649726       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-5358/test-quota\nI1004 00:00:21.660550       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3119/pvc-29hbw\" entered phase \"Bound\"\nI1004 00:00:21.683567       1 pv_controller.go:640] volume \"nfs-ncvg6\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:21.699221       1 pv_controller.go:879] volume \"nfs-ncvg6\" entered phase \"Released\"\nE1004 00:00:21.874572       1 tokens_controller.go:262] error synchronizing serviceaccount dns-6096/default: secrets \"default-token-jxpcp\" is forbidden: unable to create new content in namespace dns-6096 because it is being terminated\nI1004 00:00:21.947250       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:21.965732       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:21.970117       1 event.go:291] \"Event occurred\" object=\"job-7165/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local--1-nkpsb\"\nI1004 00:00:21.980168       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:21.992247       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:21.996670       1 event.go:291] \"Event occurred\" object=\"job-7165/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local--1-x8ncz\"\nI1004 00:00:22.010159       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:22.015124       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:22.040894       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:22.062063       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:22.064183       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:22.280304       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3248/e2e-test-webhook-k2mr7\" objectUID=bc0f67d7-ab5b-4bf2-bd0b-6700ebb46de3 kind=\"EndpointSlice\" virtual=false\nI1004 00:00:22.324004       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3248/sample-webhook-deployment-78988fc6cd\" objectUID=b7542044-a1e8-48cb-97c5-f19e7a7a6e27 kind=\"ReplicaSet\" virtual=false\nI1004 00:00:22.324859       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-3248/sample-webhook-deployment\"\nI1004 00:00:22.333120       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3248/sample-webhook-deployment-78988fc6cd\" objectUID=b7542044-a1e8-48cb-97c5-f19e7a7a6e27 kind=\"ReplicaSet\" propagationPolicy=Background\nI1004 00:00:22.333534       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3248/e2e-test-webhook-k2mr7\" objectUID=bc0f67d7-ab5b-4bf2-bd0b-6700ebb46de3 kind=\"EndpointSlice\" propagationPolicy=Background\nI1004 00:00:22.346836       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3248/sample-webhook-deployment-78988fc6cd-tn84r\" objectUID=5bb8dc68-b7b6-4c63-bcda-7e41ab077d51 kind=\"Pod\" virtual=false\nI1004 00:00:22.350270       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3248/sample-webhook-deployment-78988fc6cd-tn84r\" objectUID=5bb8dc68-b7b6-4c63-bcda-7e41ab077d51 kind=\"Pod\" propagationPolicy=Background\nI1004 00:00:22.604955       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:22.605156       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:22.605171       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:22.605183       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:22.605228       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:22.779441       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-8834\nE1004 00:00:22.928556       1 namespace_controller.go:162] deletion of namespace svcaccounts-628 failed: unexpected items still remain in namespace: svcaccounts-628 for gvr: /v1, Resource=pods\nE1004 00:00:22.939044       1 namespace_controller.go:162] deletion of namespace kubectl-6798 failed: unexpected items still remain in namespace: kubectl-6798 for gvr: /v1, Resource=pods\nE1004 00:00:23.224289       1 tokens_controller.go:262] error synchronizing serviceaccount projected-7471/default: secrets \"default-token-pt79x\" is forbidden: unable to create new content in namespace projected-7471 because it is being terminated\nI1004 00:00:23.452129       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3148\nI1004 00:00:23.719528       1 pv_controller_base.go:505] deletion of claim \"pv-5864/pvc-hxhvg\" was already processed\nI1004 00:00:23.948210       1 namespace_controller.go:185] Namespace has been deleted var-expansion-5810\nI1004 00:00:23.985253       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:24.482336       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:24.980394       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:24.989673       1 event.go:291] \"Event occurred\" object=\"job-7165/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local--1-bptvf\"\nI1004 00:00:24.992810       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:24.998113       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:25.010040       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:25.016290       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\") on node \"nodes-us-west3-a-h26h\" \nI1004 00:00:25.016583       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:25.026639       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\") on node \"nodes-us-west3-a-h26h\" \nE1004 00:00:25.234087       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1004 00:00:25.283587       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:00:25.318336       1 event.go:291] \"Event occurred\" object=\"pv-5612/pvc-m775p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"FailedBinding\" message=\"no persistent volumes available for this claim and no storage class is set\"\nI1004 00:00:25.356194       1 pv_controller.go:879] volume \"nfs-g9rs9\" entered phase \"Available\"\nE1004 00:00:25.957702       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-8682/pvc-nmw84: storageclass.storage.k8s.io \"volume-8682\" not found\nI1004 00:00:25.958376       1 event.go:291] \"Event occurred\" object=\"volume-8682/pvc-nmw84\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-8682\\\" not found\"\nE1004 00:00:25.980176       1 tokens_controller.go:262] error synchronizing serviceaccount conntrack-2339/default: secrets \"default-token-npb7t\" is forbidden: unable to create new content in namespace conntrack-2339 because it is being terminated\nI1004 00:00:26.006110       1 pv_controller.go:879] volume \"local-6v85r\" entered phase \"Available\"\nI1004 00:00:26.030919       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:26.439646       1 namespace_controller.go:185] Namespace has been deleted aggregator-8107\nI1004 00:00:26.493545       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"vol1\" (UniqueName: \"kubernetes.io/gce-pd/e2e-88724ad0-d504-4b6b-8a59-6bcf1f6dce60\") from node \"nodes-us-west3-a-chbv\" \nI1004 00:00:26.493982       1 event.go:291] \"Event occurred\" object=\"volume-5212/exec-volume-test-inlinevolume-fqnd\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"vol1\\\" \"\nI1004 00:00:26.836637       1 namespace_controller.go:185] Namespace has been deleted provisioning-8313\nE1004 00:00:26.853544       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-3450/default: secrets \"default-token-5mzv5\" is forbidden: unable to create new content in namespace svcaccounts-3450 because it is being terminated\nI1004 00:00:26.873306       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-1338/pvc-gdjw6\"\nI1004 00:00:26.902163       1 pv_controller.go:640] volume \"local-ljs25\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:26.940749       1 pv_controller.go:640] volume \"local-ljs25\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:26.956492       1 pv_controller.go:879] volume \"local-ljs25\" entered phase \"Released\"\nI1004 00:00:26.982826       1 pv_controller_base.go:505] deletion of claim \"volume-1338/pvc-gdjw6\" was already processed\nI1004 00:00:27.148053       1 namespace_controller.go:185] Namespace has been deleted resourcequota-5358\nI1004 00:00:27.193603       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:27.211416       1 event.go:291] \"Event occurred\" object=\"job-7165/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local--1-hz686\"\nI1004 00:00:27.212346       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:27.228147       1 namespace_controller.go:185] Namespace has been deleted dns-6096\nI1004 00:00:27.231741       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:27.284421       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:27.310849       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-6575/pvc-c56f4\"\nE1004 00:00:27.317807       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3522/default: secrets \"default-token-dthbh\" is forbidden: unable to create new content in namespace volume-3522 because it is being terminated\nI1004 00:00:27.318188       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:27.454285       1 pv_controller.go:640] volume \"local-tgkn4\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:27.524982       1 pv_controller.go:640] volume \"local-tgkn4\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:27.572096       1 pv_controller.go:879] volume \"local-tgkn4\" entered phase \"Released\"\nI1004 00:00:27.572853       1 pv_controller_base.go:505] deletion of claim \"provisioning-6575/pvc-c56f4\" was already processed\nE1004 00:00:27.974040       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9898/default: secrets \"default-token-xcw4r\" is forbidden: unable to create new content in namespace provisioning-9898 because it is being terminated\nE1004 00:00:28.104283       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:00:28.377522       1 event.go:291] \"Event occurred\" object=\"volumemode-7899-1439/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1004 00:00:28.560830       1 event.go:291] \"Event occurred\" object=\"volumemode-7899/csi-hostpath4xl4x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-7899\\\" or manually created by system administrator\"\nI1004 00:00:28.563142       1 event.go:291] \"Event occurred\" object=\"volumemode-7899/csi-hostpath4xl4x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-7899\\\" or manually created by system administrator\"\nE1004 00:00:28.780332       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3063/default: secrets \"default-token-snd4b\" is forbidden: unable to create new content in namespace volume-3063 because it is being terminated\nI1004 00:00:28.995071       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3119/pod-20d24395-3a0f-448b-a1cd-e17b49c61051\" PVC=\"persistent-local-volumes-test-3119/pvc-29hbw\"\nI1004 00:00:28.995108       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3119/pvc-29hbw\"\nI1004 00:00:29.003899       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-2690/sample-webhook-deployment\"\nI1004 00:00:29.008528       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-7722/ss-696cb77d7d\" objectUID=1566595c-af03-473a-b615-9003c7db4c5b kind=\"ControllerRevision\" virtual=false\nI1004 00:00:29.008845       1 stateful_set.go:440] StatefulSet has been deleted statefulset-7722/ss\nI1004 00:00:29.028288       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-7722/ss-696cb77d7d\" objectUID=1566595c-af03-473a-b615-9003c7db4c5b kind=\"ControllerRevision\" propagationPolicy=Background\nE1004 00:00:29.060485       1 namespace_controller.go:162] deletion of namespace kubectl-5125 failed: unexpected items still remain in namespace: kubectl-5125 for gvr: /v1, Resource=pods\nI1004 00:00:29.132398       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"statefulset-7722/datadir-ss-0\"\nI1004 00:00:29.182342       1 pv_controller.go:640] volume \"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:00:29.199491       1 pv_controller.go:879] volume \"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" entered phase \"Released\"\nI1004 00:00:29.205995       1 pv_controller.go:1340] isVolumeReleased[pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce]: volume is released\nI1004 00:00:29.233678       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-5898/gcepdcflks\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE1004 00:00:29.311725       1 namespace_controller.go:162] deletion of namespace kubectl-6798 failed: unexpected items still remain in namespace: kubectl-6798 for gvr: /v1, Resource=pods\nI1004 00:00:29.718274       1 job_controller.go:406] enqueueing 
job job-7363/suspend-false-to-true\nI1004 00:00:29.728763       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.733565       1 event.go:291] \"Event occurred\" object=\"job-7363/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-false-to-true--1-7pbbm\"\nI1004 00:00:29.738869       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.749774       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.750953       1 event.go:291] \"Event occurred\" object=\"job-7363/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-false-to-true--1-gpffj\"\nI1004 00:00:29.758118       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.768923       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.775511       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.799114       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:29.927379       1 stateful_set_control.go:555] StatefulSet statefulset-5081/ss terminating Pod ss-1 for update\nI1004 00:00:29.938318       1 event.go:291] \"Event occurred\" object=\"statefulset-5081/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI1004 00:00:29.979102       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:30.010410       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:30.167263       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce: googleapi: Error 400: The disk resource 
'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource\nI1004 00:00:30.167527       1 event.go:291] \"Event occurred\" object=\"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource\"\nE1004 00:00:30.167923       1 goroutinemap.go:150] Operation for \"delete-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce[ae16e88f-b97c-4033-bc57-393e5f6963f0]\" failed. No retries permitted until 2021-10-04 00:00:30.667386265 +0000 UTC m=+972.069882128 (durationBeforeRetry 500ms). 
Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource\nE1004 00:00:30.308721       1 tokens_controller.go:262] error synchronizing serviceaccount replicaset-876/default: secrets \"default-token-hpxcf\" is forbidden: unable to create new content in namespace replicaset-876 because it is being terminated\nI1004 00:00:30.350497       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-876/test-rs\" need=1 creating=1\nI1004 00:00:31.044453       1 gce_util.go:188] Successfully created single-zone GCE PD volume e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\nI1004 00:00:31.050472       1 pv_controller.go:1676] volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" provisioned for claim \"fsgroupchangepolicy-5898/gcepdcflks\"\nI1004 00:00:31.050948       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-5898/gcepdcflks\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9 using kubernetes.io/gce-pd\"\nI1004 00:00:31.059967       1 pv_controller.go:879] volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" entered phase \"Bound\"\nI1004 00:00:31.060039       1 pv_controller.go:982] volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" bound to claim \"fsgroupchangepolicy-5898/gcepdcflks\"\nI1004 00:00:31.073924       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-1264/pvc-cm5fj\"\nI1004 00:00:31.075697       1 pv_controller.go:823] claim \"fsgroupchangepolicy-5898/gcepdcflks\" entered phase \"Bound\"\nI1004 00:00:31.087354       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:31.096288       
1 pv_controller.go:640] volume \"local-dx7jm\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:31.102742       1 pv_controller.go:640] volume \"local-dx7jm\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:31.113255       1 pv_controller.go:879] volume \"local-dx7jm\" entered phase \"Released\"\nI1004 00:00:31.120714       1 pv_controller_base.go:505] deletion of claim \"provisioning-1264/pvc-cm5fj\" was already processed\nI1004 00:00:31.223963       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true\nI1004 00:00:31.242797       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3119/pod-20d24395-3a0f-448b-a1cd-e17b49c61051\" PVC=\"persistent-local-volumes-test-3119/pvc-29hbw\"\nI1004 00:00:31.242835       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3119/pvc-29hbw\"\nI1004 00:00:31.363155       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") from node \"nodes-us-west3-a-zfb0\" \nE1004 00:00:31.524824       1 tokens_controller.go:262] error synchronizing serviceaccount port-forwarding-5034/default: secrets \"default-token-thhdc\" is forbidden: unable to create new content in namespace port-forwarding-5034 because it is being terminated\nI1004 00:00:31.657135       1 utils.go:366] couldn't find ipfamilies for headless service: services-7264/externalname-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI1004 00:00:31.828866       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-7264/externalname-service\" need=2 creating=2\nI1004 00:00:31.862270       1 event.go:291] \"Event occurred\" object=\"services-7264/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-hpnfs\"\nI1004 00:00:31.896588       1 event.go:291] \"Event occurred\" object=\"services-7264/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-qhdmv\"\nI1004 00:00:31.899744       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\") on node \"nodes-us-west3-a-h26h\" \nW1004 00:00:31.992663       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-5081/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nE1004 00:00:31.997904       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nI1004 00:00:32.006630       1 event.go:291] \"Event occurred\" object=\"statefulset-5081/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nW1004 00:00:32.086232       1 reconciler.go:335] Multi-Attach error for volume \"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78\") from node \"nodes-us-west3-a-gn9g\" Volume is already exclusively attached to node nodes-us-west3-a-h26h and can't be attached to another\nI1004 00:00:32.086929       1 event.go:291] \"Event occurred\" object=\"statefulset-5081/ss-1\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI1004 00:00:32.113277       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"gc-697/simpletest.rc\" need=10 creating=10\nI1004 00:00:32.136038       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-w4hkf\"\nI1004 00:00:32.150091       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-2gxh4\"\nI1004 00:00:32.158037       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
simpletest.rc-vzh6h\"\nI1004 00:00:32.185432       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-7z4qc\"\nI1004 00:00:32.186066       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-jbd7c\"\nI1004 00:00:32.195115       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-86wvw\"\nI1004 00:00:32.206145       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fpg4n\"\nI1004 00:00:32.222115       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-t5xf7\"\nI1004 00:00:32.228647       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-9fvjh\"\nI1004 00:00:32.243283       1 event.go:291] \"Event occurred\" object=\"gc-697/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-lmxkd\"\nI1004 00:00:32.357029       1 pv_controller.go:879] volume \"gce-wmklp\" entered phase \"Available\"\nI1004 00:00:32.383405       1 pv_controller.go:930] claim \"pv-7232/pvc-h855s\" bound to volume \"gce-wmklp\"\nI1004 00:00:32.395875       1 pv_controller.go:879] volume \"gce-wmklp\" entered phase \"Bound\"\nI1004 00:00:32.395930       
1 pv_controller.go:982] volume \"gce-wmklp\" bound to claim \"pv-7232/pvc-h855s\"\nI1004 00:00:32.406260       1 pv_controller.go:823] claim \"pv-7232/pvc-h855s\" entered phase \"Bound\"\nE1004 00:00:32.441304       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nI1004 00:00:32.596163       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"gce-wmklp\" (UniqueName: \"kubernetes.io/gce-pd/e2e-934de462-96b2-44b3-9db4-507f969ed2e1\") from node \"nodes-us-west3-a-zfb0\" \nI1004 00:00:32.596622       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:32.596836       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:32.596993       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:32.597146       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:00:32.597282       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nE1004 00:00:32.675123       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nI1004 00:00:32.853089       1 pv_controller.go:930] claim \"volume-8682/pvc-nmw84\" bound to volume \"local-6v85r\"\nI1004 00:00:32.861849       1 pv_controller.go:1340] isVolumeReleased[pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce]: volume is released\nI1004 00:00:32.866203       1 pv_controller.go:1340] isVolumeReleased[pvc-b09738fb-fe11-4a0b-acf5-b1a45bb9c9f4]: volume is released\nI1004 00:00:32.868817       1 
pv_controller.go:879] volume \"local-6v85r\" entered phase \"Bound\"\nI1004 00:00:32.868899       1 pv_controller.go:982] volume \"local-6v85r\" bound to claim \"volume-8682/pvc-nmw84\"\nI1004 00:00:32.878795       1 pv_controller.go:823] claim \"volume-8682/pvc-nmw84\" entered phase \"Bound\"\nI1004 00:00:32.879364       1 pv_controller.go:930] claim \"pv-5612/pvc-m775p\" bound to volume \"nfs-g9rs9\"\nI1004 00:00:32.894404       1 pv_controller.go:879] volume \"nfs-g9rs9\" entered phase \"Bound\"\nI1004 00:00:32.896101       1 pv_controller.go:982] volume \"nfs-g9rs9\" bound to claim \"pv-5612/pvc-m775p\"\nI1004 00:00:32.908379       1 pv_controller.go:823] claim \"pv-5612/pvc-m775p\" entered phase \"Bound\"\nI1004 00:00:32.909574       1 event.go:291] \"Event occurred\" object=\"volumemode-7899/csi-hostpath4xl4x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-7899\\\" or manually created by system administrator\"\nE1004 00:00:32.944510       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nE1004 00:00:33.088164       1 tokens_controller.go:262] error synchronizing serviceaccount crd-publish-openapi-9833/default: secrets \"default-token-j9qbr\" is forbidden: unable to create new content in namespace crd-publish-openapi-9833 because it is being terminated\nI1004 00:00:33.281684       1 event.go:291] \"Event occurred\" object=\"volume-expand-8644/gcepd5xs75\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE1004 00:00:33.303597       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, 
Resource=pods\nI1004 00:00:33.307452       1 namespace_controller.go:185] Namespace has been deleted volume-3522\nI1004 00:00:33.455090       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:33.464284       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:33.719477       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-3450\nE1004 00:00:33.724898       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nI1004 00:00:33.856408       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:33.858978       1 event.go:291] \"Event occurred\" object=\"job-7165/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI1004 00:00:33.870953       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:34.012121       1 namespace_controller.go:185] Namespace has been deleted webhook-3248-markers\nE1004 00:00:34.042580       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6575/default: secrets \"default-token-8gbfl\" is forbidden: unable to create new content in namespace provisioning-6575 because it is being terminated\nI1004 00:00:34.133930       1 namespace_controller.go:185] Namespace has been deleted webhook-3248\nE1004 00:00:34.147185       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-4303/default: secrets \"default-token-bz6k5\" is forbidden: unable to create new content in namespace secrets-4303 because it is being terminated\nI1004 00:00:34.258748       1 namespace_controller.go:185] Namespace has been deleted provisioning-9898\nI1004 00:00:34.328128       1 namespace_controller.go:185] Namespace has been deleted volume-3063\nE1004 00:00:34.411662       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: 
unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nE1004 00:00:34.592433       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-3119/default: secrets \"default-token-s5mvx\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3119 because it is being terminated\nE1004 00:00:34.774276       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nI1004 00:00:34.899734       1 namespace_controller.go:185] Namespace has been deleted projected-7471\nE1004 00:00:34.998316       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nE1004 00:00:35.049645       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nI1004 00:00:35.062298       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78\") on node \"nodes-us-west3-a-h26h\" \nI1004 00:00:35.068322       1 namespace_controller.go:185] Namespace has been deleted init-container-9176\nI1004 00:00:35.068687       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78\") on node \"nodes-us-west3-a-h26h\" \nI1004 00:00:35.217576       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:35.252361       1 gce_util.go:89] Successfully deleted GCE PD volume 
e2e-5690f240d7-0a91d-k-pvc-b09738fb-fe11-4a0b-acf5-b1a45bb9c9f4\nI1004 00:00:35.252408       1 pv_controller.go:1435] volume \"pvc-b09738fb-fe11-4a0b-acf5-b1a45bb9c9f4\" deleted\nI1004 00:00:35.269733       1 pv_controller_base.go:505] deletion of claim \"provisioning-240/gcepd5d7n4\" was already processed\nI1004 00:00:35.306405       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\nI1004 00:00:35.306761       1 pv_controller.go:1435] volume \"pvc-54943fe2-91a1-497e-95e4-4e7ae4710dce\" deleted\nI1004 00:00:35.330282       1 pv_controller_base.go:505] deletion of claim \"statefulset-7722/datadir-ss-0\" was already processed\nE1004 00:00:35.379030       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nE1004 00:00:35.398038       1 tokens_controller.go:262] error synchronizing serviceaccount crd-publish-openapi-3246/default: secrets \"default-token-vlvzw\" is forbidden: unable to create new content in namespace crd-publish-openapi-3246 because it is being terminated\nI1004 00:00:35.452145       1 namespace_controller.go:185] Namespace has been deleted replicaset-876\nE1004 00:00:35.620823       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nE1004 00:00:35.955816       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nE1004 00:00:35.965155       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nE1004 00:00:36.697104       1 
tokens_controller.go:262] error synchronizing serviceaccount provisioning-1264/default: secrets \"default-token-vdhgb\" is forbidden: unable to create new content in namespace provisioning-1264 because it is being terminated\nE1004 00:00:36.723748       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nE1004 00:00:36.727800       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE1004 00:00:36.826552       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:00:37.114630       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") from node \"nodes-us-west3-a-zfb0\" \nI1004 00:00:37.115144       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-5898/pod-f31beafc-2b57-48f0-aff7-6e592dd0f388\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\\\" \"\nI1004 00:00:37.228540       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.231164       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.231580       1 garbagecollector.go:471] \"Processing object\" 
object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.231871       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.246161       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.266995       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.274595       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.275643       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nI1004 00:00:37.276691       1 garbagecollector.go:471] \"Processing object\" object=\"gc-697/simpletest.rc\" objectUID=ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a kind=\"ReplicationController\" virtual=false\nE1004 00:00:37.349113       1 replica_set.go:536] sync \"gc-697/simpletest.rc\" failed with Operation cannot be fulfilled on replicationcontrollers \"simpletest.rc\": StorageError: invalid object, Code: 4, Key: /registry/controllers/gc-697/simpletest.rc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ab0b5d1d-f934-4f36-b6f4-1e5c43f4709a, UID in object meta: \nI1004 00:00:37.568604       1 garbagecollector.go:471] \"Processing object\" object=\"services-6187/nodeport-update-service-759cc\" objectUID=0189f6ed-66e7-4c03-8357-300e97fb3e3d kind=\"EndpointSlice\" virtual=false\nI1004 00:00:37.610415       1 
garbagecollector.go:580] \"Deleting object\" object=\"services-6187/nodeport-update-service-759cc\" objectUID=0189f6ed-66e7-4c03-8357-300e97fb3e3d kind=\"EndpointSlice\" propagationPolicy=Background\nE1004 00:00:37.678844       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:00:37.693102       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-3119/pvc-29hbw\"\nI1004 00:00:37.740831       1 pv_controller.go:640] volume \"local-pvpjj78\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:37.765747       1 pv_controller.go:879] volume \"local-pvpjj78\" entered phase \"Released\"\nI1004 00:00:37.808929       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-3119/pvc-29hbw\" was already processed\nE1004 00:00:38.055091       1 namespace_controller.go:162] deletion of namespace persistent-local-volumes-test-3119 failed: unexpected items still remain in namespace: persistent-local-volumes-test-3119 for gvr: /v1, Resource=pods\nI1004 00:00:38.074408       1 pv_controller.go:879] volume \"pvc-50acebae-6cf5-4723-951d-112842942075\" entered phase \"Bound\"\nI1004 00:00:38.079127       1 pv_controller.go:982] volume \"pvc-50acebae-6cf5-4723-951d-112842942075\" bound to claim \"volumemode-7899/csi-hostpath4xl4x\"\nI1004 00:00:38.110570       1 pv_controller.go:823] claim \"volumemode-7899/csi-hostpath4xl4x\" entered phase \"Bound\"\nE1004 00:00:38.208174       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nI1004 00:00:38.333681       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-9833\nI1004 00:00:38.792854       1 reconciler.go:295] attacherDetacher.AttachVolume started for 
volume \"pvc-50acebae-6cf5-4723-951d-112842942075\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7899^1b2683d8-24a6-11ec-b5f0-8ac524f0a97b\") from node \"nodes-us-west3-a-h26h\" \nI1004 00:00:39.079600       1 namespace_controller.go:185] Namespace has been deleted volume-1338\nI1004 00:00:39.158924       1 namespace_controller.go:185] Namespace has been deleted provisioning-6575\nI1004 00:00:39.222557       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.229932       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.232920       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.238359       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.244486       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.248055       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.254702       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.259138       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.278825       1 job_controller.go:406] enqueueing job job-7165/fail-once-local\nI1004 00:00:39.372415       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-50acebae-6cf5-4723-951d-112842942075\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7899^1b2683d8-24a6-11ec-b5f0-8ac524f0a97b\") from node \"nodes-us-west3-a-h26h\" \nI1004 00:00:39.373173       1 event.go:291] \"Event occurred\" object=\"volumemode-7899/pod-55055527-38ca-47d2-b01b-7c51eca3bcc8\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-50acebae-6cf5-4723-951d-112842942075\\\" \"\nI1004 00:00:39.481373       1 namespace_controller.go:185] Namespace has been deleted secrets-4303\nE1004 00:00:39.714619       1 reflector.go:138] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:00:39.745937       1 namespace_controller.go:162] deletion of namespace kubectl-6798 failed: unexpected items still remain in namespace: kubectl-6798 for gvr: /v1, Resource=pods
I1004 00:00:39.892987       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-5612/pvc-m775p"
I1004 00:00:39.914970       1 pv_controller.go:640] volume "nfs-g9rs9" is released and reclaim policy "Retain" will be executed
I1004 00:00:39.922351       1 pv_controller.go:879] volume "nfs-g9rs9" entered phase "Released"
I1004 00:00:40.484149       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-3246
I1004 00:00:40.769203       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78") on node "nodes-us-west3-a-h26h" 
I1004 00:00:40.851160       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78") from node "nodes-us-west3-a-gn9g" 
E1004 00:00:40.946173       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods
I1004 00:00:40.981016       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "gce-wmklp" (UniqueName: "kubernetes.io/gce-pd/e2e-934de462-96b2-44b3-9db4-507f969ed2e1") from node "nodes-us-west3-a-zfb0" 
I1004 00:00:40.981552       1 event.go:291] "Event occurred" object="pv-7232/pvc-tester-m769w" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"gce-wmklp\" "
I1004 00:00:41.450610       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:00:41.798538       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:00:41.798719       1 controller_utils.go:592] "Deleting pod" controller="suspend-false-to-true" pod="job-7363/suspend-false-to-true--1-gpffj"
I1004 00:00:41.798741       1 controller_utils.go:592] "Deleting pod" controller="suspend-false-to-true" pod="job-7363/suspend-false-to-true--1-7pbbm"
I1004 00:00:41.810305       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:00:41.810354       1 event.go:291] "Event occurred" object="job-7363/suspend-false-to-true" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: suspend-false-to-true--1-7pbbm"
I1004 00:00:41.816541       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:00:41.816940       1 event.go:291] "Event occurred" object="job-7363/suspend-false-to-true" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: suspend-false-to-true--1-gpffj"
I1004 00:00:41.816958       1 event.go:291] "Event occurred" object="job-7363/suspend-false-to-true" kind="Job" apiVersion="batch/v1" type="Normal" reason="Suspended" message="Job suspended"
I1004 00:00:41.837888       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:00:42.001627       1 pv_controller_base.go:505] deletion of claim "pv-5612/pvc-m775p" was already processed
I1004 00:00:42.551117       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:42.551196       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:42.551213       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:42.551267       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:42.551286       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:42.899155       1 garbagecollector.go:471] "Processing object" object="services-6187/nodeport-update-service-ptb6g" objectUID=0271f66a-f403-4e60-af1b-7d6ff369d241 kind="Pod" virtual=false
I1004 00:00:42.899479       1 garbagecollector.go:471] "Processing object" object="services-6187/nodeport-update-service-nzsqd" objectUID=86910d4f-da7b-4f18-9cea-7d1e96898f06 kind="Pod" virtual=false
I1004 00:00:42.904330       1 garbagecollector.go:580] "Deleting object" object="services-6187/nodeport-update-service-nzsqd" objectUID=86910d4f-da7b-4f18-9cea-7d1e96898f06 kind="Pod" propagationPolicy=Background
I1004 00:00:42.905350       1 garbagecollector.go:580] "Deleting object" object="services-6187/nodeport-update-service-ptb6g" objectUID=0271f66a-f403-4e60-af1b-7d6ff369d241 kind="Pod" propagationPolicy=Background
I1004 00:00:43.029545       1 namespace_controller.go:185] Namespace has been deleted provisioning-1264
I1004 00:00:43.093178       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-7934/pvc-w6rtz"
I1004 00:00:43.094654       1 namespace_controller.go:185] Namespace has been deleted projected-4669
I1004 00:00:43.114414       1 pv_controller.go:640] volume "local-8fkdf" is released and reclaim policy "Retain" will be executed
I1004 00:00:43.121009       1 pv_controller.go:879] volume "local-8fkdf" entered phase "Released"
I1004 00:00:43.132597       1 pv_controller_base.go:505] deletion of claim "volumemode-7934/pvc-w6rtz" was already processed
I1004 00:00:43.154593       1 namespace_controller.go:185] Namespace has been deleted pv-5864
E1004 00:00:43.159716       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
I1004 00:00:43.352587       1 namespace_controller.go:185] Namespace has been deleted emptydir-5879
E1004 00:00:43.362971       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
I1004 00:00:43.552380       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3119
E1004 00:00:43.584535       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
E1004 00:00:43.855498       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:00:44.020775       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
I1004 00:00:44.104750       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-9621/gcepddnbtw" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1004 00:00:44.109782       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-240/default: secrets "default-token-nb9vk" is forbidden: unable to create new content in namespace provisioning-240 because it is being terminated
E1004 00:00:44.291867       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:00:44.320616       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-7722/default: secrets "default-token-sr7wx" is forbidden: unable to create new content in namespace statefulset-7722 because it is being terminated
I1004 00:00:44.338967       1 garbagecollector.go:471] "Processing object" object="statefulset-7722/test-ttwrn" objectUID=64ab0351-03f9-43f5-ad20-8a0f087ea809 kind="EndpointSlice" virtual=false
I1004 00:00:44.345629       1 garbagecollector.go:580] "Deleting object" object="statefulset-7722/test-ttwrn" objectUID=64ab0351-03f9-43f5-ad20-8a0f087ea809 kind="EndpointSlice" propagationPolicy=Background
I1004 00:00:44.386673       1 namespace_controller.go:185] Namespace has been deleted job-7165
E1004 00:00:44.416362       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
E1004 00:00:44.589204       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-1777/default: secrets "default-token-mq8gw" is forbidden: unable to create new content in namespace container-probe-1777 because it is being terminated
E1004 00:00:44.821925       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
E1004 00:00:45.153653       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
E1004 00:00:45.219019       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:00:45.675843       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
I1004 00:00:46.079690       1 gce_util.go:188] Successfully created single-zone GCE PD volume e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a
I1004 00:00:46.089244       1 pv_controller.go:1676] volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" provisioned for claim "fsgroupchangepolicy-9621/gcepddnbtw"
I1004 00:00:46.089732       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-9621/gcepddnbtw" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a using kubernetes.io/gce-pd"
I1004 00:00:46.095066       1 pv_controller.go:879] volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" entered phase "Bound"
I1004 00:00:46.095114       1 pv_controller.go:982] volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" bound to claim "fsgroupchangepolicy-9621/gcepddnbtw"
I1004 00:00:46.106256       1 pv_controller.go:823] claim "fsgroupchangepolicy-9621/gcepddnbtw" entered phase "Bound"
I1004 00:00:46.248884       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") from node "nodes-us-west3-a-chbv" 
E1004 00:00:46.301825       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods
I1004 00:00:46.512464       1 namespace_controller.go:185] Namespace has been deleted gc-5324
E1004 00:00:46.567171       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
E1004 00:00:46.832168       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-4458/pvc-bpx9z: storageclass.storage.k8s.io "volumemode-4458" not found
I1004 00:00:46.832284       1 event.go:291] "Event occurred" object="volumemode-4458/pvc-bpx9z" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-4458\" not found"
I1004 00:00:46.867561       1 pv_controller.go:879] volume "local-sgm8v" entered phase "Available"
I1004 00:00:46.914284       1 replica_set.go:563] "Too few replicas" replicaSet="gc-9599/simpletest.rc" need=10 creating=10
I1004 00:00:46.925925       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-f2rsv"
I1004 00:00:46.945351       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-22rdm"
I1004 00:00:46.945387       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-rfrd4"
I1004 00:00:46.962158       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-qts99"
I1004 00:00:46.971556       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-96b4h"
I1004 00:00:46.973407       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-wgzgv"
I1004 00:00:46.975771       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-987nh"
I1004 00:00:47.027101       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-znnbv"
I1004 00:00:47.027520       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-jzzm8"
I1004 00:00:47.033830       1 event.go:291] "Event occurred" object="gc-9599/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-q7shj"
I1004 00:00:47.381927       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-8636/pvc-cpl9k"
I1004 00:00:47.405092       1 pv_controller.go:640] volume "pvc-b4dcf7a9-5431-4038-b589-c91af156ff9d" is released and reclaim policy "Delete" will be executed
I1004 00:00:47.417074       1 pv_controller.go:879] volume "pvc-b4dcf7a9-5431-4038-b589-c91af156ff9d" entered phase "Released"
I1004 00:00:47.430448       1 pv_controller.go:1340] isVolumeReleased[pvc-b4dcf7a9-5431-4038-b589-c91af156ff9d]: volume is released
I1004 00:00:47.444273       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-8636/pvc-cpl9k" was already processed
I1004 00:00:47.853957       1 pv_controller.go:930] claim "volumemode-4458/pvc-bpx9z" bound to volume "local-sgm8v"
I1004 00:00:47.884842       1 pv_controller.go:879] volume "local-sgm8v" entered phase "Bound"
I1004 00:00:47.885459       1 pv_controller.go:982] volume "local-sgm8v" bound to claim "volumemode-4458/pvc-bpx9z"
I1004 00:00:47.899481       1 pv_controller.go:823] claim "volumemode-4458/pvc-bpx9z" entered phase "Bound"
I1004 00:00:47.908494       1 event.go:291] "Event occurred" object="volume-expand-8644/gcepd5xs75" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1004 00:00:48.028362       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78") from node "nodes-us-west3-a-gn9g" 
I1004 00:00:48.028744       1 event.go:291] "Event occurred" object="statefulset-5081/ss-1" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\" "
E1004 00:00:48.131446       1 namespace_controller.go:162] deletion of namespace services-6187 failed: unexpected items still remain in namespace: services-6187 for gvr: /v1, Resource=pods
E1004 00:00:49.177684       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:00:49.382237       1 namespace_controller.go:185] Namespace has been deleted provisioning-240
I1004 00:00:49.730314       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-50acebae-6cf5-4723-951d-112842942075" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-7899^1b2683d8-24a6-11ec-b5f0-8ac524f0a97b") on node "nodes-us-west3-a-h26h" 
I1004 00:00:49.735291       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-50acebae-6cf5-4723-951d-112842942075" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-7899^1b2683d8-24a6-11ec-b5f0-8ac524f0a97b") on node "nodes-us-west3-a-h26h" 
E1004 00:00:49.761358       1 namespace_controller.go:162] deletion of namespace kubectl-5125 failed: unexpected items still remain in namespace: kubectl-5125 for gvr: /v1, Resource=pods
I1004 00:00:49.769973       1 namespace_controller.go:185] Namespace has been deleted statefulset-7722
I1004 00:00:49.835825       1 namespace_controller.go:185] Namespace has been deleted container-probe-1777
I1004 00:00:49.907657       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-7899/csi-hostpath4xl4x"
I1004 00:00:49.936401       1 pv_controller.go:640] volume "pvc-50acebae-6cf5-4723-951d-112842942075" is released and reclaim policy "Delete" will be executed
I1004 00:00:49.946547       1 pv_controller.go:879] volume "pvc-50acebae-6cf5-4723-951d-112842942075" entered phase "Released"
I1004 00:00:49.955822       1 pv_controller.go:1340] isVolumeReleased[pvc-50acebae-6cf5-4723-951d-112842942075]: volume is released
I1004 00:00:50.005198       1 pv_controller_base.go:505] deletion of claim "volumemode-7899/csi-hostpath4xl4x" was already processed
I1004 00:00:50.300979       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-50acebae-6cf5-4723-951d-112842942075" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-7899^1b2683d8-24a6-11ec-b5f0-8ac524f0a97b") on node "nodes-us-west3-a-h26h" 
I1004 00:00:50.542457       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-7338/gcepdhmk94"
I1004 00:00:50.552301       1 pv_controller.go:640] volume "pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8" is released and reclaim policy "Delete" will be executed
I1004 00:00:50.560387       1 pv_controller.go:879] volume "pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8" entered phase "Released"
I1004 00:00:50.563748       1 pv_controller.go:1340] isVolumeReleased[pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8]: volume is released
I1004 00:00:50.609219       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="pv-7232/pvc-tester-m769w" PVC="pv-7232/pvc-h855s"
I1004 00:00:50.610532       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="pv-7232/pvc-h855s"
I1004 00:00:51.590700       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
E1004 00:00:51.591087       1 goroutinemap.go:150] Operation for "delete-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8[32162f15-9d68-418f-9299-a6c47eab5137]" failed. No retries permitted until 2021-10-04 00:00:52.091048093 +0000 UTC m=+993.493543945 (durationBeforeRetry 500ms). Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource
I1004 00:00:51.591285       1 event.go:291] "Event occurred" object="pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-gn9g', resourceInUseByAnotherResource"
E1004 00:00:51.928899       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:00:51.976729       1 graph_builder.go:587] add [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] to the attemptToDelete, because it's waiting for its dependents to be deleted
I1004 00:00:51.977140       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-wgzgv" objectUID=cb72b008-6fb6-4d2f-8939-40aa23f5d49d kind="Pod" virtual=false
I1004 00:00:51.977665       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-znnbv" objectUID=2d16932d-2e08-418c-9793-c8018277f87a kind="Pod" virtual=false
I1004 00:00:51.977698       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-f2rsv" objectUID=d6dbc723-b9db-47f2-a452-fd00e4c49512 kind="Pod" virtual=false
I1004 00:00:51.977728       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-rfrd4" objectUID=4c4f5b13-4749-4b1d-96bf-9c9aaaba3b34 kind="Pod" virtual=false
I1004 00:00:51.977759       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-22rdm" objectUID=7aa2d3fe-0fa2-40c4-8f0c-9899171dca0a kind="Pod" virtual=false
I1004 00:00:51.977784       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-q7shj" objectUID=fffe24cc-9c86-414d-84b4-dd1e2b0a9e10 kind="Pod" virtual=false
I1004 00:00:51.977811       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-qts99" objectUID=c9dae693-eab6-4408-8a29-7c1d7cc7c149 kind="Pod" virtual=false
I1004 00:00:51.977839       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-987nh" objectUID=9eb61002-fd9c-4a2d-9c8c-bab3d8d5d6be kind="Pod" virtual=false
I1004 00:00:51.977867       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-jzzm8" objectUID=d8610fc1-d8c1-43bd-924a-eae67b9aca75 kind="Pod" virtual=false
I1004 00:00:51.977917       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc" objectUID=e3ef3c85-b511-4307-a0ff-fc838c35ec60 kind="ReplicationController" virtual=false
I1004 00:00:51.977958       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-96b4h" objectUID=251f89ad-e337-4541-9124-a8f1476c357e kind="Pod" virtual=false
I1004 00:00:52.015731       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-rfrd4, uid: 4c4f5b13-4749-4b1d-96bf-9c9aaaba3b34] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.015779       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-22rdm, uid: 7aa2d3fe-0fa2-40c4-8f0c-9899171dca0a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.015793       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-96b4h, uid: 251f89ad-e337-4541-9124-a8f1476c357e] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.015931       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-wgzgv, uid: cb72b008-6fb6-4d2f-8939-40aa23f5d49d] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.015943       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-znnbv, uid: 2d16932d-2e08-418c-9793-c8018277f87a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.015953       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-f2rsv, uid: d6dbc723-b9db-47f2-a452-fd00e4c49512] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.015965       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-987nh, uid: 9eb61002-fd9c-4a2d-9c8c-bab3d8d5d6be] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.016015       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-jzzm8, uid: d8610fc1-d8c1-43bd-924a-eae67b9aca75] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.016059       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-q7shj, uid: fffe24cc-9c86-414d-84b4-dd1e2b0a9e10] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.016070       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-qts99, uid: c9dae693-eab6-4408-8a29-7c1d7cc7c149] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.023564       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-wgzgv" objectUID=cb72b008-6fb6-4d2f-8939-40aa23f5d49d kind="Pod" propagationPolicy=Background
I1004 00:00:52.039307       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc" objectUID=e3ef3c85-b511-4307-a0ff-fc838c35ec60 kind="ReplicationController" virtual=false
I1004 00:00:52.045046       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-jzzm8" objectUID=d8610fc1-d8c1-43bd-924a-eae67b9aca75 kind="Pod" propagationPolicy=Background
I1004 00:00:52.045593       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-987nh" objectUID=9eb61002-fd9c-4a2d-9c8c-bab3d8d5d6be kind="Pod" propagationPolicy=Background
I1004 00:00:52.046803       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-wgzgv" objectUID=cb72b008-6fb6-4d2f-8939-40aa23f5d49d kind="Pod" virtual=false
I1004 00:00:52.047011       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-22rdm" objectUID=7aa2d3fe-0fa2-40c4-8f0c-9899171dca0a kind="Pod" propagationPolicy=Background
I1004 00:00:52.047400       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-96b4h" objectUID=251f89ad-e337-4541-9124-a8f1476c357e kind="Pod" propagationPolicy=Background
I1004 00:00:52.048628       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-znnbv" objectUID=2d16932d-2e08-418c-9793-c8018277f87a kind="Pod" propagationPolicy=Background
I1004 00:00:52.049289       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-qts99" objectUID=c9dae693-eab6-4408-8a29-7c1d7cc7c149 kind="Pod" propagationPolicy=Background
I1004 00:00:52.049786       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-rfrd4" objectUID=4c4f5b13-4749-4b1d-96bf-9c9aaaba3b34 kind="Pod" propagationPolicy=Background
I1004 00:00:52.050331       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-96b4h, uid: 251f89ad-e337-4541-9124-a8f1476c357e] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.050465       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-znnbv, uid: 2d16932d-2e08-418c-9793-c8018277f87a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.050493       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-f2rsv" objectUID=d6dbc723-b9db-47f2-a452-fd00e4c49512 kind="Pod" propagationPolicy=Background
I1004 00:00:52.050476       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-f2rsv, uid: d6dbc723-b9db-47f2-a452-fd00e4c49512] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.050678       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-rfrd4, uid: 4c4f5b13-4749-4b1d-96bf-9c9aaaba3b34] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.050780       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-22rdm, uid: 7aa2d3fe-0fa2-40c4-8f0c-9899171dca0a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.050950       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-q7shj, uid: fffe24cc-9c86-414d-84b4-dd1e2b0a9e10] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.051039       1 garbagecollector.go:580] "Deleting object" object="gc-9599/simpletest.rc-q7shj" objectUID=fffe24cc-9c86-414d-84b4-dd1e2b0a9e10 kind="Pod" propagationPolicy=Background
I1004 00:00:52.051014       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-qts99, uid: c9dae693-eab6-4408-8a29-7c1d7cc7c149] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.051132       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-987nh, uid: 9eb61002-fd9c-4a2d-9c8c-bab3d8d5d6be] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.051159       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9599, name: simpletest.rc-jzzm8, uid: d8610fc1-d8c1-43bd-924a-eae67b9aca75] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60] is deletingDependents
I1004 00:00:52.072395       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-qts99" objectUID=c9dae693-eab6-4408-8a29-7c1d7cc7c149 kind="Pod" virtual=false
I1004 00:00:52.087258       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc" objectUID=e3ef3c85-b511-4307-a0ff-fc838c35ec60 kind="ReplicationController" virtual=false
I1004 00:00:52.095032       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-znnbv" objectUID=2d16932d-2e08-418c-9793-c8018277f87a kind="Pod" virtual=false
I1004 00:00:52.103576       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-f2rsv" objectUID=d6dbc723-b9db-47f2-a452-fd00e4c49512 kind="Pod" virtual=false
I1004 00:00:52.103813       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-22rdm" objectUID=7aa2d3fe-0fa2-40c4-8f0c-9899171dca0a kind="Pod" virtual=false
I1004 00:00:52.103868       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-rfrd4" objectUID=4c4f5b13-4749-4b1d-96bf-9c9aaaba3b34 kind="Pod" virtual=false
I1004 00:00:52.104005       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-96b4h" objectUID=251f89ad-e337-4541-9124-a8f1476c357e kind="Pod" virtual=false
I1004 00:00:52.104045       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-jzzm8" objectUID=d8610fc1-d8c1-43bd-924a-eae67b9aca75 kind="Pod" virtual=false
I1004 00:00:52.104750       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-987nh" objectUID=9eb61002-fd9c-4a2d-9c8c-bab3d8d5d6be kind="Pod" virtual=false
I1004 00:00:52.137248       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc-q7shj" objectUID=fffe24cc-9c86-414d-84b4-dd1e2b0a9e10 kind="Pod" virtual=false
I1004 00:00:52.168602       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/ReplicationController, namespace: gc-9599, name: simpletest.rc, uid: e3ef3c85-b511-4307-a0ff-fc838c35ec60]
I1004 00:00:52.262372       1 garbagecollector.go:471] "Processing object" object="gc-9599/simpletest.rc" objectUID=e3ef3c85-b511-4307-a0ff-fc838c35ec60 kind="ReplicationController" virtual=false
E1004 00:00:52.362407       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-6396/default: secrets "default-token-h48vp" is forbidden: unable to create new content in namespace emptydir-6396 because it is being terminated
I1004 00:00:52.572360       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9") on node "nodes-us-west3-a-zfb0" 
I1004 00:00:52.576609       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9") on node "nodes-us-west3-a-zfb0" 
I1004 00:00:52.705036       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:52.705112       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:52.705127       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:52.705150       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:52.705182       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:00:53.084411       1 namespace_controller.go:185] Namespace has been deleted dns-4660
I1004 00:00:53.366616       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") from node "nodes-us-west3-a-chbv" 
I1004 00:00:53.367029       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-9621/pod-2ac2148c-a73c-4b5b-bd64-7223ca4131de" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a\" "
I1004 00:00:53.929335       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="pv-7232/pvc-tester-m769w" PVC="pv-7232/pvc-h855s"
I1004 00:00:53.929712       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="pv-7232/pvc-h855s"
I1004 00:00:54.130778       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="pv-7232/pvc-tester-m769w" PVC="pv-7232/pvc-h855s"
I1004 00:00:54.131112       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="pv-7232/pvc-h855s"
I1004 00:00:54.143945       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-7232/pvc-h855s"
I1004 00:00:54.173442       1 pv_controller.go:640] volume "gce-wmklp" is released and reclaim policy "Retain" will be executed
I1004 00:00:54.187824       1 pv_controller.go:879] volume "gce-wmklp" entered phase "Released"
I1004 00:00:54.196214       1 namespace_controller.go:185] Namespace has been deleted volumemode-7934
E1004 00:00:54.243270       1 
namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nE1004 00:00:54.530578       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nI1004 00:00:54.824495       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-8682/pvc-nmw84\"\nI1004 00:00:54.848697       1 pv_controller.go:640] volume \"local-6v85r\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:54.862661       1 pv_controller.go:640] volume \"local-6v85r\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:00:54.874900       1 pv_controller.go:879] volume \"local-6v85r\" entered phase \"Released\"\nE1004 00:00:54.875257       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nI1004 00:00:54.880602       1 pv_controller_base.go:505] deletion of claim \"volume-8682/pvc-nmw84\" was already processed\nI1004 00:00:54.950049       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\") on node \"nodes-us-west3-a-gn9g\" \nI1004 00:00:54.960206       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\") on node \"nodes-us-west3-a-gn9g\" \nE1004 00:00:55.158583       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nI1004 00:00:55.297402       1 event.go:291] \"Event occurred\" 
object=\"volume-458-8211/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1004 00:00:55.367474       1 event.go:291] \"Event occurred\" object=\"volume-458/csi-hostpathljwm8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-458\\\" or manually created by system administrator\"\nI1004 00:00:55.367969       1 event.go:291] \"Event occurred\" object=\"volume-458/csi-hostpathljwm8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-458\\\" or manually created by system administrator\"\nE1004 00:00:55.561181       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nE1004 00:00:55.688547       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-5128/pvc-bzkzb: storageclass.storage.k8s.io \"provisioning-5128\" not found\nI1004 00:00:55.688605       1 event.go:291] \"Event occurred\" object=\"provisioning-5128/pvc-bzkzb\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5128\\\" not found\"\nI1004 00:00:55.723379       1 pv_controller.go:879] volume \"local-5ct9n\" entered phase \"Available\"\nE1004 00:00:55.866410       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nI1004 00:00:55.898148       1 namespace_controller.go:185] Namespace has been deleted services-6187\nE1004 
00:00:56.211339       1 tokens_controller.go:262] error synchronizing serviceaccount dns-797/default: secrets \"default-token-dftss\" is forbidden: unable to create new content in namespace dns-797 because it is being terminated\nE1004 00:00:56.341449       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nI1004 00:00:56.369058       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8636-5837/csi-mockplugin\nI1004 00:00:56.369379       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8636-5837/csi-mockplugin-55bd6dfc74\" objectUID=15d23d60-db80-479b-8b6b-c14c163dbf3b kind=\"ControllerRevision\" virtual=false\nI1004 00:00:56.369715       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8636-5837/csi-mockplugin-0\" objectUID=66076eca-6ee2-4572-a3ce-a1535689bb32 kind=\"Pod\" virtual=false\nI1004 00:00:56.373082       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8636-5837/csi-mockplugin-0\" objectUID=66076eca-6ee2-4572-a3ce-a1535689bb32 kind=\"Pod\" propagationPolicy=Background\nI1004 00:00:56.373590       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8636-5837/csi-mockplugin-55bd6dfc74\" objectUID=15d23d60-db80-479b-8b6b-c14c163dbf3b kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:00:56.501078       1 namespace_controller.go:185] Namespace has been deleted projected-888\nE1004 00:00:56.772330       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods\nE1004 00:00:56.932552       1 namespace_controller.go:162] deletion of namespace conntrack-2339 failed: unexpected items still remain in namespace: conntrack-2339 for gvr: /v1, Resource=pods\nI1004 00:00:57.016165       1 namespace_controller.go:185] Namespace has 
been deleted emptydir-6131\nI1004 00:00:57.244081       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:00:57.253679       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:00:57.255164       1 event.go:291] \"Event occurred\" object=\"job-9731/all-pods-removed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-pods-removed--1-7q6dw\"\nI1004 00:00:57.260968       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:00:57.268306       1 event.go:291] \"Event occurred\" object=\"job-9731/all-pods-removed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-pods-removed--1-bmvr8\"\nI1004 00:00:57.269760       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:00:57.274347       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:00:57.279336       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:00:57.429108       1 namespace_controller.go:185] Namespace has been deleted emptydir-6396\nE1004 00:00:57.668632       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:00:57.975607       1 pv_controller.go:879] volume \"local-pvcgvc2\" entered phase \"Available\"\nI1004 00:00:57.996478       1 pv_controller.go:930] claim \"persistent-local-volumes-test-4306/pvc-j8wcn\" bound to volume \"local-pvcgvc2\"\nI1004 00:00:58.006905       1 pv_controller.go:879] volume \"local-pvcgvc2\" entered phase \"Bound\"\nI1004 00:00:58.006949       1 pv_controller.go:982] volume \"local-pvcgvc2\" bound to claim \"persistent-local-volumes-test-4306/pvc-j8wcn\"\nI1004 00:00:58.015915       1 pv_controller.go:823] claim 
\"persistent-local-volumes-test-4306/pvc-j8wcn\" entered phase \"Bound\"\nI1004 00:00:58.128869       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:00:58.192117       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") from node \"nodes-us-west3-a-chbv\" \nI1004 00:00:58.329993       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nE1004 00:00:58.554873       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1004 00:00:58.773601       1 tokens_controller.go:262] error synchronizing serviceaccount gc-9599/default: secrets \"default-token-5qq6t\" is forbidden: unable to create new content in namespace gc-9599 because it is being terminated\nI1004 00:00:59.260345       1 stateful_set_control.go:555] StatefulSet statefulset-5081/ss terminating Pod ss-0 for update\nI1004 00:00:59.275946       1 event.go:291] \"Event occurred\" object=\"statefulset-5081/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI1004 00:00:59.449188       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-5034\nI1004 00:00:59.901449       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8636\nE1004 00:01:00.290564       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server 
could not find the requested resource\nE1004 00:01:00.384430       1 tokens_controller.go:262] error synchronizing serviceaccount volume-8682/default: secrets \"default-token-xnlq8\" is forbidden: unable to create new content in namespace volume-8682 because it is being terminated\nI1004 00:01:00.577120       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\") on node \"nodes-us-west3-a-gn9g\" \nE1004 00:01:00.627102       1 namespace_controller.go:162] deletion of namespace kubectl-6798 failed: unexpected items still remain in namespace: kubectl-6798 for gvr: /v1, Resource=pods\nI1004 00:01:01.120098       1 event.go:291] \"Event occurred\" object=\"statefulset-5081/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE1004 00:01:01.164166       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-5595/default: secrets \"default-token-m6dbb\" is forbidden: unable to create new content in namespace secrets-5595 because it is being terminated\nE1004 00:01:01.165971       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:01:01.305565       1 namespace_controller.go:185] Namespace has been deleted dns-797\nE1004 00:01:01.489381       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-8636-5837/default: secrets \"default-token-vzjph\" is forbidden: unable to create new content in namespace csi-mock-volumes-8636-5837 because it is being terminated\nI1004 00:01:01.795567       1 garbagecollector.go:471] \"Processing object\" object=\"services-7264/externalname-service-qkqj6\" 
objectUID=5c6b7403-a820-4ca4-a59f-caf668c45431 kind=\"EndpointSlice\" virtual=false\nI1004 00:01:01.810195       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7264/externalname-service-qkqj6\" objectUID=5c6b7403-a820-4ca4-a59f-caf668c45431 kind=\"EndpointSlice\" propagationPolicy=Background\nI1004 00:01:02.296638       1 stateful_set.go:440] StatefulSet has been deleted volumemode-7899-1439/csi-hostpathplugin\nI1004 00:01:02.296676       1 garbagecollector.go:471] \"Processing object\" object=\"volumemode-7899-1439/csi-hostpathplugin-0\" objectUID=0c011502-a73d-43a5-ba44-519783f1da82 kind=\"Pod\" virtual=false\nI1004 00:01:02.297085       1 garbagecollector.go:471] \"Processing object\" object=\"volumemode-7899-1439/csi-hostpathplugin-674f8d59db\" objectUID=be745525-0f28-486e-ae5f-3440712ffc9e kind=\"ControllerRevision\" virtual=false\nI1004 00:01:02.304679       1 garbagecollector.go:580] \"Deleting object\" object=\"volumemode-7899-1439/csi-hostpathplugin-0\" objectUID=0c011502-a73d-43a5-ba44-519783f1da82 kind=\"Pod\" propagationPolicy=Background\nI1004 00:01:02.312335       1 garbagecollector.go:580] \"Deleting object\" object=\"volumemode-7899-1439/csi-hostpathplugin-674f8d59db\" objectUID=be745525-0f28-486e-ae5f-3440712ffc9e kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:01:02.339967       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:01:02.514188       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"gce-wmklp\" (UniqueName: \"kubernetes.io/gce-pd/e2e-934de462-96b2-44b3-9db4-507f969ed2e1\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:02.526888       1 event.go:291] \"Event occurred\" object=\"volume-expand-523/gcepdswzhc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1004 00:01:02.532953       1 operation_generator.go:1577] Verified volume is 
safe to detach for volume \"gce-wmklp\" (UniqueName: \"kubernetes.io/gce-pd/e2e-934de462-96b2-44b3-9db4-507f969ed2e1\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:02.586665       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:02.586701       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:02.586742       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:02.586758       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:02.586716       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set\nE1004 00:01:02.663238       1 tokens_controller.go:262] error synchronizing serviceaccount pv-7895/default: secrets \"default-token-qhxxf\" is forbidden: unable to create new content in namespace pv-7895 because it is being terminated\nI1004 00:01:02.798355       1 namespace_controller.go:185] Namespace has been deleted conntrack-2339\nE1004 00:01:02.803253       1 tokens_controller.go:262] error synchronizing serviceaccount init-container-5527/default: secrets \"default-token-qxthn\" is forbidden: unable to create new content in namespace init-container-5527 because it is being terminated\nI1004 00:01:02.857790       1 pv_controller.go:930] claim \"provisioning-5128/pvc-bzkzb\" bound to volume \"local-5ct9n\"\nI1004 00:01:02.858183       1 event.go:291] \"Event occurred\" object=\"volume-expand-8644/gcepd5xs75\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1004 00:01:02.859233       1 
pv_controller.go:1340] isVolumeReleased[pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8]: volume is released\nI1004 00:01:02.873301       1 pv_controller.go:879] volume \"local-5ct9n\" entered phase \"Bound\"\nI1004 00:01:02.873351       1 pv_controller.go:982] volume \"local-5ct9n\" bound to claim \"provisioning-5128/pvc-bzkzb\"\nI1004 00:01:02.885420       1 pv_controller.go:823] claim \"provisioning-5128/pvc-bzkzb\" entered phase \"Bound\"\nI1004 00:01:02.886468       1 event.go:291] \"Event occurred\" object=\"volume-458/csi-hostpathljwm8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-458\\\" or manually created by system administrator\"\nI1004 00:01:03.157032       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4306/pod-9dcf1f26-b7f7-4cf8-b764-18bde7442888\" PVC=\"persistent-local-volumes-test-4306/pvc-j8wcn\"\nI1004 00:01:03.157170       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4306/pvc-j8wcn\"\nI1004 00:01:03.258182       1 pv_controller.go:879] volume \"pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1\" entered phase \"Bound\"\nI1004 00:01:03.258238       1 pv_controller.go:982] volume \"pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1\" bound to claim \"volume-458/csi-hostpathljwm8\"\nI1004 00:01:03.284312       1 pv_controller.go:823] claim \"volume-458/csi-hostpathljwm8\" entered phase \"Bound\"\nI1004 00:01:03.518543       1 event.go:291] \"Event occurred\" object=\"volume-expand-8644/gcepd5xs75\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1004 00:01:03.523822       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-8644/gcepd5xs75\"\nI1004 00:01:03.630694       1 
reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-458^2a27e524-24a6-11ec-a11e-7eae2341e105\") from node \"nodes-us-west3-a-gn9g\" \nI1004 00:01:04.061802       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"vol1\" (UniqueName: \"kubernetes.io/gce-pd/e2e-88724ad0-d504-4b6b-8a59-6bcf1f6dce60\") on node \"nodes-us-west3-a-chbv\" \nI1004 00:01:04.070219       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"vol1\" (UniqueName: \"kubernetes.io/gce-pd/e2e-88724ad0-d504-4b6b-8a59-6bcf1f6dce60\") on node \"nodes-us-west3-a-chbv\" \nI1004 00:01:04.182126       1 namespace_controller.go:185] Namespace has been deleted gc-9599\nI1004 00:01:04.193153       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-458^2a27e524-24a6-11ec-a11e-7eae2341e105\") from node \"nodes-us-west3-a-gn9g\" \nI1004 00:01:04.193558       1 event.go:291] \"Event occurred\" object=\"volume-458/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1\\\" \"\nE1004 00:01:04.233124       1 namespace_controller.go:162] deletion of namespace svcaccounts-628 failed: unexpected items still remain in namespace: svcaccounts-628 for gvr: /v1, Resource=pods\nI1004 00:01:04.461926       1 gce_util.go:188] Successfully created single-zone GCE PD volume e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\nI1004 00:01:04.467815       1 namespace_controller.go:185] Namespace has been deleted pv-5612\nI1004 00:01:04.471407       1 pv_controller.go:1676] volume \"pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\" provisioned for claim \"volume-expand-523/gcepdswzhc\"\nI1004 00:01:04.471839       
1 event.go:291] \"Event occurred\" object=\"volume-expand-523/gcepdswzhc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-923647c2-0450-4d55-ba0c-1595b8ef1853 using kubernetes.io/gce-pd\"\nI1004 00:01:04.476069       1 pv_controller.go:879] volume \"pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\" entered phase \"Bound\"\nI1004 00:01:04.476117       1 pv_controller.go:982] volume \"pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\" bound to claim \"volume-expand-523/gcepdswzhc\"\nI1004 00:01:04.486631       1 pv_controller.go:823] claim \"volume-expand-523/gcepdswzhc\" entered phase \"Bound\"\nI1004 00:01:04.679645       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\") from node \"nodes-us-west3-a-h26h\" \nE1004 00:01:04.715867       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-3799/pvc-9gpd8: storageclass.storage.k8s.io \"volume-3799\" not found\nI1004 00:01:04.716195       1 event.go:291] \"Event occurred\" object=\"volume-3799/pvc-9gpd8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-3799\\\" not found\"\nI1004 00:01:04.744042       1 pv_controller.go:879] volume \"local-kbf7b\" entered phase \"Available\"\nI1004 00:01:05.061711       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [mygroup.example.com/v1beta1, Resource=foo5wr7nas]\nI1004 00:01:05.062033       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1004 00:01:05.062217       1 shared_informer.go:247] Caches are synced for garbage collector \nI1004 00:01:05.062233       1 garbagecollector.go:254] synced garbage 
collector\nI1004 00:01:05.246587       1 pv_controller_base.go:505] deletion of claim \"pv-7232/pvc-h855s\" was already processed\nI1004 00:01:05.329948       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed\nI1004 00:01:05.347925       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") from node \"nodes-us-west3-a-chbv\" \nI1004 00:01:05.348226       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-5898/pod-8b84e641-00f6-42eb-b36e-3117ca8e1bfe\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\\\" \"\nI1004 00:01:05.395079       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\nI1004 00:01:05.395122       1 pv_controller.go:1435] volume \"pvc-e1b92df1-ac9e-45d3-b588-b774434e7bf8\" deleted\nI1004 00:01:05.421729       1 pv_controller_base.go:505] deletion of claim \"volume-7338/gcepdhmk94\" was already processed\nI1004 00:01:05.522928       1 namespace_controller.go:185] Namespace has been deleted volumemode-7899\nI1004 00:01:05.774142       1 namespace_controller.go:185] Namespace has been deleted volume-8682\nI1004 00:01:05.796981       1 namespace_controller.go:185] Namespace has been deleted downward-api-6053\nI1004 00:01:06.452832       1 namespace_controller.go:185] Namespace has been deleted secrets-5595\nI1004 00:01:06.792348       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8636-5837\nI1004 00:01:07.006359       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-7264/externalname-service\" need=2 creating=1\nI1004 00:01:07.086155       1 garbagecollector.go:471] \"Processing object\" 
object=\"services-7264/externalname-service-qhdmv\" objectUID=a31eb887-9b63-4155-90df-2e6a1bdac6e9 kind=\"Pod\" virtual=false\nI1004 00:01:07.086193       1 garbagecollector.go:471] \"Processing object\" object=\"services-7264/externalname-service-hpnfs\" objectUID=22b94c52-49d1-45e3-95f3-3cfdcbcacb34 kind=\"Pod\" virtual=false\nE1004 00:01:07.194299       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods\nE1004 00:01:07.371726       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods\nE1004 00:01:07.615079       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods\nI1004 00:01:07.703864       1 namespace_controller.go:185] Namespace has been deleted pv-7895\nE1004 00:01:07.895920       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods\nI1004 00:01:07.958220       1 namespace_controller.go:185] Namespace has been deleted init-container-5527\nE1004 00:01:08.161227       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods\nI1004 00:01:08.268559       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"gce-wmklp\" (UniqueName: \"kubernetes.io/gce-pd/e2e-934de462-96b2-44b3-9db4-507f969ed2e1\") on node \"nodes-us-west3-a-zfb0\" \nE1004 00:01:08.382492       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods\nE1004 00:01:08.752490       1 tokens_controller.go:262] error synchronizing serviceaccount 
persistent-local-volumes-test-4306/default: secrets "default-token-k7xfj" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4306 because it is being terminated
E1004 00:01:08.933896       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods
I1004 00:01:09.284500       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-1977
I1004 00:01:09.321607       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed
I1004 00:01:09.322663       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-3417/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I1004 00:01:09.326402       1 event.go:291] "Event occurred" object="webhook-3417/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I1004 00:01:09.335901       1 event.go:291] "Event occurred" object="webhook-3417/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-dlplx"
I1004 00:01:09.355169       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3417/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:01:09.395409       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4306/pod-9dcf1f26-b7f7-4cf8-b764-18bde7442888" PVC="persistent-local-volumes-test-4306/pvc-j8wcn"
I1004 00:01:09.396218       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4306/pvc-j8wcn"
E1004 00:01:09.494994       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods
I1004 00:01:09.605834       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "kubernetes.io/gce-pd/e2e-88724ad0-d504-4b6b-8a59-6bcf1f6dce60") on node "nodes-us-west3-a-chbv" 
E1004 00:01:09.897258       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-8092/pvc-lvlch: storageclass.storage.k8s.io "volume-8092" not found
I1004 00:01:09.897606       1 event.go:291] "Event occurred" object="volume-8092/pvc-lvlch" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-8092\" not found"
I1004 00:01:09.932393       1 pv_controller.go:879] volume "local-ggrjt" entered phase "Available"
I1004 00:01:09.993223       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4306/pod-9dcf1f26-b7f7-4cf8-b764-18bde7442888" PVC="persistent-local-volumes-test-4306/pvc-j8wcn"
I1004 00:01:09.993560       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4306/pvc-j8wcn"
I1004 00:01:09.999727       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-4306/pvc-j8wcn"
I1004 00:01:10.009103       1 pv_controller.go:640] volume "local-pvcgvc2" is released and reclaim policy "Retain" will be executed
I1004 00:01:10.016994       1 pv_controller.go:879] volume "local-pvcgvc2" entered phase "Released"
I1004 00:01:10.024159       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-4306/pvc-j8wcn" was already processed
I1004 00:01:10.102670       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-4458/pvc-bpx9z"
I1004 00:01:10.116164       1 pv_controller.go:640] volume "local-sgm8v" is released and reclaim policy "Retain" will be executed
I1004 00:01:10.122650       1 pv_controller.go:879] volume "local-sgm8v" entered phase "Released"
I1004 00:01:10.139023       1 pv_controller_base.go:505] deletion of claim "volumemode-4458/pvc-bpx9z" was already processed
E1004 00:01:10.486457       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods
I1004 00:01:10.524530       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853") from node "nodes-us-west3-a-h26h" 
I1004 00:01:10.525321       1 event.go:291] "Event occurred" object="volume-expand-523/pod-413cde9f-deaa-4756-9511-9d917e8443d5" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\" "
I1004 00:01:11.366452       1 garbagecollector.go:471] "Processing object" object="job-9731/all-pods-removed--1-7q6dw" objectUID=41a86717-8d52-4b47-9c11-f0d5a528b99c kind="Pod" virtual=false
I1004 00:01:11.366753       1 job_controller.go:406] enqueueing job job-9731/all-pods-removed
I1004 00:01:11.366968       1 garbagecollector.go:471] "Processing object" object="job-9731/all-pods-removed--1-bmvr8" objectUID=573f7f0d-eb41-47ce-b4ab-b9f1625520ab kind="Pod" virtual=false
I1004 00:01:11.391929       1 garbagecollector.go:580] "Deleting object" object="job-9731/all-pods-removed--1-bmvr8" objectUID=573f7f0d-eb41-47ce-b4ab-b9f1625520ab kind="Pod" propagationPolicy=Background
I1004 00:01:11.392135       1 garbagecollector.go:580] "Deleting object" object="job-9731/all-pods-removed--1-7q6dw" objectUID=41a86717-8d52-4b47-9c11-f0d5a528b99c kind="Pod" propagationPolicy=Background
I1004 00:01:11.664492       1 stateful_set_control.go:521] StatefulSet statefulset-5081/ss terminating Pod ss-2 for scale down
I1004 00:01:11.675007       1 event.go:291] "Event occurred" object="statefulset-5081/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
E1004 00:01:11.966015       1 namespace_controller.go:162] deletion of namespace services-7264 failed: unexpected items still remain in namespace: services-7264 for gvr: /v1, Resource=pods
I1004 00:01:12.017054       1 pv_controller.go:879] volume "local-pvx5bbj" entered phase "Available"
I1004 00:01:12.027498       1 pv_controller.go:930] claim "persistent-local-volumes-test-5458/pvc-xpdbz" bound to volume "local-pvx5bbj"
I1004 00:01:12.057676       1 pv_controller.go:879] volume "local-pvx5bbj" entered phase "Bound"
I1004 00:01:12.057742       1 pv_controller.go:982] volume "local-pvx5bbj" bound to claim "persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:12.072747       1 pv_controller.go:823] claim "persistent-local-volumes-test-5458/pvc-xpdbz" entered phase "Bound"
E1004 00:01:12.158721       1 tokens_controller.go:262] error synchronizing serviceaccount init-container-6932/default: secrets "default-token-2m5jp" is forbidden: unable to create new content in namespace init-container-6932 because it is being terminated
I1004 00:01:12.678897       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:12.678938       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:12.678963       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:12.679009       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:12.679024       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:12.835611       1 namespace_controller.go:185] Namespace has been deleted volumemode-7899-1439
I1004 00:01:12.960181       1 namespace_controller.go:185] Namespace has been deleted projected-8424
I1004 00:01:13.971752       1 namespace_controller.go:185] Namespace has been deleted volume-expand-8644
I1004 00:01:14.142405       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") on node "nodes-us-west3-a-chbv" 
I1004 00:01:14.147855       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") on node "nodes-us-west3-a-chbv" 
I1004 00:01:14.235079       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:14.246410       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
E1004 00:01:14.247445       1 job_controller.go:441] Error syncing job: failed pod(s) detected for job key "job-7363/suspend-false-to-true"
I1004 00:01:14.394330       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-5128/pvc-bzkzb"
I1004 00:01:14.402305       1 pv_controller.go:640] volume "local-5ct9n" is released and reclaim policy "Retain" will be executed
I1004 00:01:14.406337       1 pv_controller.go:879] volume "local-5ct9n" entered phase "Released"
I1004 00:01:14.421899       1 pv_controller_base.go:505] deletion of claim "provisioning-5128/pvc-bzkzb" was already processed
I1004 00:01:14.429246       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:14.432019       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:14.445090       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:14.763153       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-3526/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I1004 00:01:14.763928       1 event.go:291] "Event occurred" object="webhook-3526/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I1004 00:01:14.780298       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3526/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:01:14.798464       1 event.go:291] "Event occurred" object="webhook-3526/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-cq4x5"
E1004 00:01:15.281116       1 tokens_controller.go:262] error synchronizing serviceaccount secret-namespace-8954/default: secrets "default-token-4tlfz" is forbidden: unable to create new content in namespace secret-namespace-8954 because it is being terminated
I1004 00:01:15.544458       1 namespace_controller.go:185] Namespace has been deleted pv-7232
I1004 00:01:15.957561       1 namespace_controller.go:185] Namespace has been deleted volume-7338
I1004 00:01:16.043858       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
E1004 00:01:16.068902       1 job_controller.go:441] Error syncing job: failed pod(s) detected for job key "job-7363/suspend-false-to-true"
I1004 00:01:16.069107       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:16.249945       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:16.265795       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
I1004 00:01:16.274498       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
E1004 00:01:16.517850       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:17.015096       1 event.go:291] "Event occurred" object="volumemode-6465/gcepdnqxg5" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1004 00:01:17.317030       1 namespace_controller.go:185] Namespace has been deleted init-container-6932
E1004 00:01:17.517528       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods
E1004 00:01:17.730076       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:17.854211       1 pv_controller.go:930] claim "volume-3799/pvc-9gpd8" bound to volume "local-kbf7b"
I1004 00:01:17.867738       1 pv_controller.go:879] volume "local-kbf7b" entered phase "Bound"
I1004 00:01:17.867823       1 pv_controller.go:982] volume "local-kbf7b" bound to claim "volume-3799/pvc-9gpd8"
I1004 00:01:17.892632       1 pv_controller.go:823] claim "volume-3799/pvc-9gpd8" entered phase "Bound"
I1004 00:01:17.892916       1 pv_controller.go:930] claim "volume-8092/pvc-lvlch" bound to volume "local-ggrjt"
I1004 00:01:17.908455       1 event.go:291] "Event occurred" object="csi-mock-volumes-8416-7569/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I1004 00:01:17.948515       1 pv_controller.go:879] volume "local-ggrjt" entered phase "Bound"
I1004 00:01:17.948809       1 pv_controller.go:982] volume "local-ggrjt" bound to claim "volume-8092/pvc-lvlch"
I1004 00:01:17.982749       1 event.go:291] "Event occurred" object="csi-mock-volumes-8416-7569/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I1004 00:01:17.990268       1 pv_controller.go:823] claim "volume-8092/pvc-lvlch" entered phase "Bound"
I1004 00:01:17.992177       1 event.go:291] "Event occurred" object="csi-mock-volumes-8416-7569/csi-mockplugin-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful"
I1004 00:01:18.343195       1 stateful_set_control.go:521] StatefulSet statefulset-5081/ss terminating Pod ss-1 for scale down
I1004 00:01:18.351753       1 event.go:291] "Event occurred" object="statefulset-5081/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
E1004 00:01:18.565599       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:01:18.741357       1 tokens_controller.go:262] error synchronizing serviceaccount gc-697/default: secrets "default-token-rplxt" is forbidden: unable to create new content in namespace gc-697 because it is being terminated
I1004 00:01:18.857906       1 gce_util.go:188] Successfully created single-zone GCE PD volume e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852
I1004 00:01:18.872903       1 pv_controller.go:1676] volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" provisioned for claim "volumemode-6465/gcepdnqxg5"
I1004 00:01:18.873143       1 event.go:291] "Event occurred" object="volumemode-6465/gcepdnqxg5" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852 using kubernetes.io/gce-pd"
I1004 00:01:18.879242       1 pv_controller.go:879] volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" entered phase "Bound"
I1004 00:01:18.879297       1 pv_controller.go:982] volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" bound to claim "volumemode-6465/gcepdnqxg5"
I1004 00:01:18.911161       1 pv_controller.go:823] claim "volumemode-6465/gcepdnqxg5" entered phase "Bound"
I1004 00:01:19.102211       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852") from node "nodes-us-west3-a-zfb0" 
I1004 00:01:19.235078       1 event.go:291] "Event occurred" object="csi-mock-volumes-9607-2908/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I1004 00:01:19.276018       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4306
E1004 00:01:19.341773       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-8655/default: secrets "default-token-hskkd" is forbidden: unable to create new content in namespace emptydir-8655 because it is being terminated
I1004 00:01:19.708815       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") on node "nodes-us-west3-a-chbv" 
I1004 00:01:19.713571       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") from node "nodes-us-west3-a-chbv" 
I1004 00:01:19.748789       1 namespace_controller.go:185] Namespace has been deleted services-7264
I1004 00:01:20.259175       1 stateful_set_control.go:521] StatefulSet statefulset-5081/ss terminating Pod ss-0 for scale down
W1004 00:01:20.261350       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "statefulset-5081/test", retrying. Error: EndpointSlice informer cache is out of date
I1004 00:01:20.272803       1 event.go:291] "Event occurred" object="statefulset-5081/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I1004 00:01:20.394511       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-8954
I1004 00:01:20.410785       1 namespace_controller.go:185] Namespace has been deleted projected-2052
I1004 00:01:20.761248       1 pv_controller.go:879] volume "local-pvjbqtz" entered phase "Available"
I1004 00:01:20.781122       1 pv_controller.go:930] claim "persistent-local-volumes-test-2963/pvc-5l9zb" bound to volume "local-pvjbqtz"
I1004 00:01:20.798062       1 pv_controller.go:879] volume "local-pvjbqtz" entered phase "Bound"
I1004 00:01:20.798212       1 pv_controller.go:982] volume "local-pvjbqtz" bound to claim "persistent-local-volumes-test-2963/pvc-5l9zb"
I1004 00:01:20.806639       1 pv_controller.go:823] claim "persistent-local-volumes-test-2963/pvc-5l9zb" entered phase "Bound"
E1004 00:01:20.847629       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-5283/pvc-nj879: storageclass.storage.k8s.io "volume-5283" not found
I1004 00:01:20.848373       1 event.go:291] "Event occurred" object="volume-5283/pvc-nj879" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-5283\" not found"
I1004 00:01:20.880729       1 pv_controller.go:879] volume "gcepd-h4wgn" entered phase "Available"
E1004 00:01:21.079333       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9832/default: secrets "default-token-hdc9d" is forbidden: unable to create new content in namespace provisioning-9832 because it is being terminated
I1004 00:01:21.377820       1 namespace_controller.go:185] Namespace has been deleted volumemode-4458
E1004 00:01:21.855210       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-7710/default: secrets "default-token-clg74" is forbidden: unable to create new content in namespace kubectl-7710 because it is being terminated
I1004 00:01:22.642068       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:22.642645       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:22.642661       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:22.642677       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:22.642693       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:22.679268       1 garbagecollector.go:471] "Processing object" object="webhook-3417/e2e-test-webhook-ccg48" objectUID=913ca645-e5aa-480a-88d3-6af889f215bb kind="EndpointSlice" virtual=false
I1004 00:01:22.726738       1 garbagecollector.go:471] "Processing object" object="webhook-3417/sample-webhook-deployment-78988fc6cd" objectUID=3133e992-8836-437c-bcc2-c2a6c1f8a9b3 kind="ReplicaSet" virtual=false
I1004 00:01:22.727382       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-3417/sample-webhook-deployment"
I1004 00:01:22.733028       1 garbagecollector.go:580] "Deleting object" object="webhook-3417/e2e-test-webhook-ccg48" objectUID=913ca645-e5aa-480a-88d3-6af889f215bb kind="EndpointSlice" propagationPolicy=Background
I1004 00:01:22.733284       1 garbagecollector.go:580] "Deleting object" object="webhook-3417/sample-webhook-deployment-78988fc6cd" objectUID=3133e992-8836-437c-bcc2-c2a6c1f8a9b3 kind="ReplicaSet" propagationPolicy=Background
I1004 00:01:22.741121       1 garbagecollector.go:471] "Processing object" object="webhook-3417/sample-webhook-deployment-78988fc6cd-dlplx" objectUID=9274e890-590d-4f53-9be6-f57e3350874e kind="Pod" virtual=false
I1004 00:01:22.746190       1 garbagecollector.go:580] "Deleting object" object="webhook-3417/sample-webhook-deployment-78988fc6cd-dlplx" objectUID=9274e890-590d-4f53-9be6-f57e3350874e kind="Pod" propagationPolicy=Background
I1004 00:01:23.099265       1 job_controller.go:406] enqueueing job job-7363/suspend-false-to-true
E1004 00:01:23.117928       1 tokens_controller.go:262] error synchronizing serviceaccount job-7363/default: secrets "default-token-79b6p" is forbidden: unable to create new content in namespace job-7363 because it is being terminated
E1004 00:01:23.781891       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-9640/default: secrets "default-token-986sm" is forbidden: unable to create new content in namespace kubectl-9640 because it is being terminated
I1004 00:01:23.974709       1 namespace_controller.go:185] Namespace has been deleted gc-697
I1004 00:01:24.179693       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38") on node "nodes-us-west3-a-chbv" 
I1004 00:01:24.192453       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38") on node "nodes-us-west3-a-chbv" 
I1004 00:01:24.436722       1 namespace_controller.go:185] Namespace has been deleted emptydir-8655
I1004 00:01:24.996751       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78") on node "nodes-us-west3-a-gn9g" 
I1004 00:01:25.000372       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78") on node "nodes-us-west3-a-gn9g" 
I1004 00:01:25.062943       1 namespace_controller.go:185] Namespace has been deleted provisioning-5128
E1004 00:01:25.103446       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:25.168659       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852") from node "nodes-us-west3-a-zfb0" 
I1004 00:01:25.168905       1 event.go:291] "Event occurred" object="volumemode-6465/pod-8b3f885f-185c-45b1-b976-7872a3eb2ecf" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\" "
I1004 00:01:25.218340       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-5458/pod-5b1858f5-979e-4c60-a614-e8f1c269aaa4" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:25.218573       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:25.716416       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") from node "nodes-us-west3-a-chbv" 
I1004 00:01:25.716975       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-9621/pod-dd3d556b-9871-4e08-a716-d8c6b37aff22" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a\" "
I1004 00:01:26.020627       1 namespace_controller.go:185] Namespace has been deleted volume-5212
I1004 00:01:26.186994       1 namespace_controller.go:185] Namespace has been deleted provisioning-9832
E1004 00:01:26.821417       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:26.863204       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3") on node "nodes-us-west3-a-zfb0" 
I1004 00:01:26.871881       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3") on node "nodes-us-west3-a-zfb0" 
I1004 00:01:26.897814       1 namespace_controller.go:185] Namespace has been deleted kubectl-7710
E1004 00:01:26.957519       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:27.018984       1 namespace_controller.go:185] Namespace has been deleted apf-3588
I1004 00:01:27.683480       1 event.go:291] "Event occurred" object="csi-mock-volumes-8416/pvc-g77b4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8416\" or manually created by system administrator"
I1004 00:01:27.732928       1 pv_controller.go:879] volume "pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f" entered phase "Bound"
I1004 00:01:27.734687       1 pv_controller.go:982] volume "pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f" bound to claim "csi-mock-volumes-8416/pvc-g77b4"
I1004 00:01:27.749664       1 pv_controller.go:823] claim "csi-mock-volumes-8416/pvc-g77b4" entered phase "Bound"
E1004 00:01:27.880136       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-3417-markers/default: secrets "default-token-tc589" is forbidden: unable to create new content in namespace webhook-3417-markers because it is being terminated
E1004 00:01:27.948694       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-3417/default: secrets "default-token-8khv5" is forbidden: unable to create new content in namespace webhook-3417 because it is being terminated
I1004 00:01:28.048959       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-5458/pod-5b1858f5-979e-4c60-a614-e8f1c269aaa4" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:28.048998       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:28.242131       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-5458/pod-5b1858f5-979e-4c60-a614-e8f1c269aaa4" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:28.242414       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:28.251043       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-5458/pvc-xpdbz"
I1004 00:01:28.259021       1 pv_controller.go:640] volume "local-pvx5bbj" is released and reclaim policy "Retain" will be executed
I1004 00:01:28.264720       1 pv_controller.go:879] volume "local-pvx5bbj" entered phase "Released"
I1004 00:01:28.269873       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-5458/pvc-xpdbz" was already processed
I1004 00:01:28.384689       1 namespace_controller.go:185] Namespace has been deleted job-7363
I1004 00:01:29.019294       1 namespace_controller.go:185] Namespace has been deleted kubectl-9640
E1004 00:01:29.521534       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2319/default: secrets "default-token-xl78k" is forbidden: unable to create new content in namespace provisioning-2319 because it is being terminated
E1004 00:01:29.723667       1 tokens_controller.go:262] error synchronizing serviceaccount tables-9647/default: secrets "default-token-gc5dv" is forbidden: unable to create new content in namespace tables-9647 because it is being terminated
I1004 00:01:29.886289       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8416^4") from node "nodes-us-west3-a-zfb0" 
I1004 00:01:30.000924       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-7033/sample-webhook-deployment"
I1004 00:01:30.302352       1 garbagecollector.go:471] "Processing object" object="webhook-3526/e2e-test-webhook-h86sq" objectUID=4075c5c1-bf66-4e5d-bcd3-b59643e525d0 kind="EndpointSlice" virtual=false
I1004 00:01:30.312483       1 garbagecollector.go:580] "Deleting object" object="webhook-3526/e2e-test-webhook-h86sq" objectUID=4075c5c1-bf66-4e5d-bcd3-b59643e525d0 kind="EndpointSlice" propagationPolicy=Background
I1004 00:01:30.358780       1 garbagecollector.go:471] "Processing object" object="webhook-3526/sample-webhook-deployment-78988fc6cd" objectUID=88664712-8a3d-4b10-be52-926cbc99d673 kind="ReplicaSet" virtual=false
I1004 00:01:30.359348       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-3526/sample-webhook-deployment"
I1004 00:01:30.362622       1 garbagecollector.go:580] "Deleting object" object="webhook-3526/sample-webhook-deployment-78988fc6cd" objectUID=88664712-8a3d-4b10-be52-926cbc99d673 kind="ReplicaSet" propagationPolicy=Background
I1004 00:01:30.370903       1 garbagecollector.go:471] "Processing object" object="webhook-3526/sample-webhook-deployment-78988fc6cd-cq4x5" objectUID=fa7e56d4-f5e2-42b3-baa5-ceec981a865e kind="Pod" virtual=false
I1004 00:01:30.374770       1 garbagecollector.go:580] "Deleting object" object="webhook-3526/sample-webhook-deployment-78988fc6cd-cq4x5" objectUID=fa7e56d4-f5e2-42b3-baa5-ceec981a865e kind="Pod" propagationPolicy=Background
I1004 00:01:30.410019       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78") on node "nodes-us-west3-a-gn9g" 
I1004 00:01:30.462518       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8416^4") from node "nodes-us-west3-a-zfb0" 
I1004 00:01:30.463300       1 event.go:291] "Event occurred" object="csi-mock-volumes-8416/pvc-volume-tester-5pvh2" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f\" "
I1004 00:01:30.539506       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-8092/pvc-lvlch"
I1004 00:01:30.555728       1 pv_controller.go:640] volume "local-ggrjt" is released and reclaim policy "Retain" will be executed
I1004 00:01:30.566670       1 pv_controller.go:640] volume "local-ggrjt" is released and reclaim policy "Retain" will be executed
I1004 00:01:30.571572       1 pv_controller.go:879] volume "local-ggrjt" entered phase "Released"
I1004 00:01:30.579484       1 pv_controller_base.go:505] deletion of claim "volume-8092/pvc-lvlch" was already processed
E1004 00:01:30.977620       1 namespace_controller.go:162] deletion of namespace kubectl-5125 failed: unexpected items still remain in namespace: kubectl-5125 for gvr: /v1, Resource=pods
I1004 00:01:31.123966       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38") on node "nodes-us-west3-a-chbv" 
I1004 00:01:31.783716       1 garbagecollector.go:471] "Processing object" object="statefulset-5081/ss-5b7957f447" objectUID=9c24c4ec-9b55-4ea8-84a6-9d86d2a5fd1a kind="ControllerRevision" virtual=false
I1004 00:01:31.784160       1 stateful_set.go:440] StatefulSet has been deleted statefulset-5081/ss
I1004 00:01:31.784236       1 garbagecollector.go:471] "Processing object" object="statefulset-5081/ss-7bd6c98d8" objectUID=4291cb58-d7e9-4ebd-8360-231223b8466f kind="ControllerRevision" virtual=false
I1004 00:01:31.789012       1 garbagecollector.go:580] "Deleting object" object="statefulset-5081/ss-7bd6c98d8" objectUID=4291cb58-d7e9-4ebd-8360-231223b8466f kind="ControllerRevision" propagationPolicy=Background
I1004 00:01:31.789632       1 garbagecollector.go:580] "Deleting object" object="statefulset-5081/ss-5b7957f447" objectUID=9c24c4ec-9b55-4ea8-84a6-9d86d2a5fd1a kind="ControllerRevision" propagationPolicy=Background
I1004 00:01:31.867305       1 pvc_protection_controller.go:291] "PVC is unused" PVC="statefulset-5081/datadir-ss-0"
E1004 00:01:31.876498       1 tokens_controller.go:262] error synchronizing serviceaccount projected-7202/default: secrets "default-token-kpx6r" is forbidden: unable to create new content in namespace projected-7202 because it is being terminated
I1004 00:01:31.892413       1 event.go:291] "Event occurred" object="provisioning-8651-9245/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I1004 00:01:31.929109       1 pv_controller.go:640] volume "pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3" is released and reclaim policy "Delete" will be executed
I1004 00:01:31.931426       1 pvc_protection_controller.go:291] "PVC is unused" PVC="statefulset-5081/datadir-ss-1"
I1004 00:01:31.972600       1 pv_controller.go:879] volume "pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3" entered phase "Released"
I1004 00:01:31.993100       1 pv_controller.go:1340] isVolumeReleased[pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3]: volume is released
I1004 00:01:32.005423       1 pv_controller.go:640] volume "pvc-b82c12ac-4303-4815-a826-0507bb0cca78" is released and reclaim policy "Delete" will be executed
I1004 00:01:32.006365       1 pvc_protection_controller.go:291] "PVC is unused" PVC="statefulset-5081/datadir-ss-2"
I1004 00:01:32.007922       1 event.go:291] "Event occurred" object="provisioning-8651/csi-hostpathwsjd8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-8651\" or manually created by system administrator"
I1004 00:01:32.044667    
   1 pv_controller.go:879] volume \"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\" entered phase \"Released\"\nI1004 00:01:32.052525       1 pv_controller.go:1340] isVolumeReleased[pvc-b82c12ac-4303-4815-a826-0507bb0cca78]: volume is released\nI1004 00:01:32.061934       1 pv_controller.go:640] volume \"pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:01:32.067372       1 pv_controller.go:879] volume \"pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38\" entered phase \"Released\"\nI1004 00:01:32.070803       1 pv_controller.go:1340] isVolumeReleased[pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38]: volume is released\nE1004 00:01:32.178849       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:01:32.406764       1 operation_generator.go:300] VerifyVolumesAreAttached.BulkVerifyVolumes failed for node \"nodes-us-west3-a-zfb0\" and volume \"pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3\"\nI1004 00:01:32.541956       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:32.584559       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:32.584824       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:32.584966       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:32.585198       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was 
canceled because it is already set\nI1004 00:01:32.585223       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nE1004 00:01:32.727870       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:01:32.861279       1 pv_controller.go:930] claim \"volume-5283/pvc-nj879\" bound to volume \"gcepd-h4wgn\"\nI1004 00:01:32.862024       1 event.go:291] \"Event occurred\" object=\"provisioning-8651/csi-hostpathwsjd8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-8651\\\" or manually created by system administrator\"\nI1004 00:01:32.877312       1 pv_controller.go:879] volume \"gcepd-h4wgn\" entered phase \"Bound\"\nI1004 00:01:32.877539       1 pv_controller.go:982] volume \"gcepd-h4wgn\" bound to claim \"volume-5283/pvc-nj879\"\nI1004 00:01:32.890764       1 pv_controller.go:823] claim \"volume-5283/pvc-nj879\" entered phase \"Bound\"\nI1004 00:01:32.996841       1 namespace_controller.go:185] Namespace has been deleted webhook-3417-markers\nI1004 00:01:33.006574       1 namespace_controller.go:185] Namespace has been deleted webhook-3417\nI1004 00:01:33.148678       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1004 00:01:33.233818       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod 
csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI1004 00:01:33.240028       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI1004 00:01:33.259222       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"gcepd-h4wgn\" (UniqueName: \"kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5\") from node \"nodes-us-west3-a-chbv\" \nI1004 00:01:33.899016       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2963/pod-56d0aeb6-efd0-44a2-b74a-7cc6197d9191\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:33.899691       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:34.371332       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38\nI1004 00:01:34.371373       1 pv_controller.go:1435] volume \"pvc-8f50ebbe-108a-4074-a2a0-c0631d9d4f38\" deleted\nI1004 00:01:34.387952       1 pv_controller_base.go:505] deletion of claim \"statefulset-5081/datadir-ss-2\" was already processed\nI1004 00:01:34.391896       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-b82c12ac-4303-4815-a826-0507bb0cca78\nI1004 00:01:34.392017       1 pv_controller.go:1435] volume \"pvc-b82c12ac-4303-4815-a826-0507bb0cca78\" deleted\nI1004 00:01:34.417419       1 pv_controller_base.go:505] deletion of claim \"statefulset-5081/datadir-ss-1\" was already processed\nI1004 00:01:34.500371       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3\nI1004 00:01:34.500609       1 pv_controller.go:1435] volume 
\"pvc-3e9a7405-1873-46a9-b56d-547b71a90bc3\" deleted\nI1004 00:01:34.549177       1 pv_controller_base.go:505] deletion of claim \"statefulset-5081/datadir-ss-0\" was already processed\nI1004 00:01:34.613260       1 namespace_controller.go:185] Namespace has been deleted provisioning-2319\nI1004 00:01:34.793620       1 namespace_controller.go:185] Namespace has been deleted tables-9647\nI1004 00:01:34.990721       1 event.go:291] \"Event occurred\" object=\"provisioning-5165/gcepdczng6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1004 00:01:36.066134       1 event.go:291] \"Event occurred\" object=\"provisioning-3815-1990/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1004 00:01:36.152739       1 event.go:291] \"Event occurred\" object=\"provisioning-3815/csi-hostpathhj76d\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-3815\\\" or manually created by system administrator\"\nI1004 00:01:36.192747       1 namespace_controller.go:185] Namespace has been deleted projected-4852\nI1004 00:01:36.664772       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-9642/rs\" need=10 creating=10\nI1004 00:01:36.674813       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-w8s27\"\nI1004 00:01:36.692396       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-82r99\"\nI1004 
00:01:36.706389       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-2qvxf\"\nI1004 00:01:36.734857       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-pr45n\"\nI1004 00:01:36.735173       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-ngx5c\"\nI1004 00:01:36.752504       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-gqngf\"\nI1004 00:01:36.759703       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-l7kkb\"\nI1004 00:01:36.789759       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-24krk\"\nI1004 00:01:36.790140       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-svhsn\"\nI1004 00:01:36.790273       1 event.go:291] \"Event occurred\" object=\"disruption-9642/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-jp9sm\"\nI1004 00:01:36.842728       1 gce_util.go:188] Successfully created single-zone GCE PD volume e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\nI1004 00:01:36.858200       1 pv_controller.go:1676] volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" provisioned for claim 
\"provisioning-5165/gcepdczng6\"\nI1004 00:01:36.858905       1 event.go:291] \"Event occurred\" object=\"provisioning-5165/gcepdczng6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5 using kubernetes.io/gce-pd\"\nI1004 00:01:36.864993       1 pv_controller.go:879] volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" entered phase \"Bound\"\nI1004 00:01:36.865042       1 pv_controller.go:982] volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" bound to claim \"provisioning-5165/gcepdczng6\"\nI1004 00:01:36.887222       1 pv_controller.go:823] claim \"provisioning-5165/gcepdczng6\" entered phase \"Bound\"\nI1004 00:01:36.917358       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-5458\nI1004 00:01:37.027727       1 namespace_controller.go:185] Namespace has been deleted projected-7202\nI1004 00:01:37.135749       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\") from node \"nodes-us-west3-a-h26h\" \nI1004 00:01:37.216570       1 garbagecollector.go:471] \"Processing object\" object=\"services-1548/nodeport-reuse-zmsm7\" objectUID=0735e446-4d29-49c3-819a-9eb0f39b3142 kind=\"EndpointSlice\" virtual=false\nI1004 00:01:37.231017       1 garbagecollector.go:580] \"Deleting object\" object=\"services-1548/nodeport-reuse-zmsm7\" objectUID=0735e446-4d29-49c3-819a-9eb0f39b3142 kind=\"EndpointSlice\" propagationPolicy=Background\nI1004 00:01:37.882440       1 namespace_controller.go:185] Namespace has been deleted configmap-8652\nI1004 00:01:37.916084       1 pv_controller.go:879] volume \"pvc-01e53778-4593-4978-bf78-49dc8057299b\" entered phase \"Bound\"\nI1004 00:01:37.917508       1 pv_controller.go:982] volume 
\"pvc-01e53778-4593-4978-bf78-49dc8057299b\" bound to claim \"provisioning-8651/csi-hostpathwsjd8\"\nI1004 00:01:37.932906       1 pv_controller.go:823] claim \"provisioning-8651/csi-hostpathwsjd8\" entered phase \"Bound\"\nI1004 00:01:38.252213       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-01e53778-4593-4978-bf78-49dc8057299b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8651^3ed55172-24a6-11ec-8770-b6fbeada2d5a\") from node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:38.569221       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") on node \"nodes-us-west3-a-chbv\" \nI1004 00:01:38.572537       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\") on node \"nodes-us-west3-a-chbv\" \nI1004 00:01:38.829917       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-01e53778-4593-4978-bf78-49dc8057299b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8651^3ed55172-24a6-11ec-8770-b6fbeada2d5a\") from node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:38.830251       1 event.go:291] \"Event occurred\" object=\"provisioning-8651/pod-subpath-test-dynamicpv-6jtj\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-01e53778-4593-4978-bf78-49dc8057299b\\\" \"\nI1004 00:01:39.300174       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:39.304858       1 
operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:01:39.340543       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-5898/gcepdcflks\"\nI1004 00:01:39.355846       1 pv_controller.go:640] volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:01:39.364196       1 pv_controller.go:879] volume \"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" entered phase \"Released\"\nI1004 00:01:39.367861       1 pv_controller.go:1340] isVolumeReleased[pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9]: volume is released\nI1004 00:01:40.287699       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource\nE1004 00:01:40.287787       1 goroutinemap.go:150] Operation for \"delete-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9[b3f72a6a-1052-4340-87ee-6b01a50b0fa6]\" failed. No retries permitted until 2021-10-04 00:01:40.787762295 +0000 UTC m=+1042.190258152 (durationBeforeRetry 500ms). 
Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource\nI1004 00:01:40.288102       1 event.go:291] \"Event occurred\" object=\"pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource\"\nI1004 00:01:40.318165       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2963/pod-56d0aeb6-efd0-44a2-b74a-7cc6197d9191\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:40.318211       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:40.337994       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-6465/gcepdnqxg5\"\nI1004 00:01:40.346816       1 pv_controller.go:640] volume \"pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:01:40.351653       1 pv_controller.go:879] volume \"pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\" entered phase \"Released\"\nI1004 00:01:40.355055       1 pv_controller.go:1340] isVolumeReleased[pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852]: volume is released\nI1004 00:01:40.430060       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"gcepd-h4wgn\" (UniqueName: \"kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5\") from node 
\"nodes-us-west3-a-chbv\" \nI1004 00:01:40.430621       1 event.go:291] \"Event occurred\" object=\"volume-5283/gcepd-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"gcepd-h4wgn\\\" \"\nI1004 00:01:40.519289       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2963/pod-56d0aeb6-efd0-44a2-b74a-7cc6197d9191\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:40.519628       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:40.575734       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-2963/pvc-5l9zb\"\nI1004 00:01:40.600884       1 pv_controller.go:640] volume \"local-pvjbqtz\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:01:40.609828       1 pv_controller.go:879] volume \"local-pvjbqtz\" entered phase \"Released\"\nI1004 00:01:40.611425       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-2963/pvc-5l9zb\" was already processed\nI1004 00:01:40.614493       1 namespace_controller.go:185] Namespace has been deleted webhook-3526-markers\nI1004 00:01:40.639478       1 namespace_controller.go:185] Namespace has been deleted webhook-3526\nI1004 00:01:40.726713       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9607/pvc-cqs6c\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-9607\\\" or manually created by system administrator\"\nI1004 00:01:40.746308       1 pv_controller.go:879] volume \"pvc-65cc08e0-bf00-4c2d-aa94-3164ec3865ec\" entered phase \"Bound\"\nI1004 00:01:40.746353       1 pv_controller.go:982] volume \"pvc-65cc08e0-bf00-4c2d-aa94-3164ec3865ec\" 
bound to claim \"csi-mock-volumes-9607/pvc-cqs6c\"\nI1004 00:01:40.756874       1 pv_controller.go:823] claim \"csi-mock-volumes-9607/pvc-cqs6c\" entered phase \"Bound\"\nI1004 00:01:40.975023       1 operation_generator.go:1613] ExpandVolume succeeded for volume volume-expand-523/gcepdswzhc\nI1004 00:01:40.985338       1 operation_generator.go:1625] ExpandVolume.UpdatePV succeeded for volume volume-expand-523/gcepdswzhc\nI1004 00:01:40.991035       1 event.go:291] \"Event occurred\" object=\"volume-expand-523/gcepdswzhc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeResizeSuccessful\" message=\"ExpandVolume succeeded for volume volume-expand-523/gcepdswzhc\"\nI1004 00:01:41.155115       1 pv_controller.go:879] volume \"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\" entered phase \"Bound\"\nI1004 00:01:41.155167       1 pv_controller.go:982] volume \"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\" bound to claim \"provisioning-3815/csi-hostpathhj76d\"\nI1004 00:01:41.169513       1 pv_controller.go:823] claim \"provisioning-3815/csi-hostpathhj76d\" entered phase \"Bound\"\nI1004 00:01:41.332772       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0', resourceInUseByAnotherResource\nE1004 00:01:41.333038       1 goroutinemap.go:150] Operation for \"delete-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852[306b4fb8-9850-485b-a3b1-c9d85098dc03]\" failed. No retries permitted until 2021-10-04 00:01:41.833011301 +0000 UTC m=+1043.235507154 (durationBeforeRetry 500ms). 
Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0', resourceInUseByAnotherResource\nI1004 00:01:41.333200       1 event.go:291] \"Event occurred\" object=\"pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-zfb0', resourceInUseByAnotherResource\"\nE1004 00:01:41.640228       1 tokens_controller.go:262] error synchronizing serviceaccount prestop-9797/default: secrets \"default-token-zrcls\" is forbidden: unable to create new content in namespace prestop-9797 because it is being terminated\nI1004 00:01:41.741045       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-3799/pvc-9gpd8\"\nI1004 00:01:41.757120       1 pv_controller.go:640] volume \"local-kbf7b\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:01:41.775746       1 pv_controller.go:640] volume \"local-kbf7b\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:01:41.786974       1 pv_controller.go:879] volume \"local-kbf7b\" entered phase \"Released\"\nI1004 00:01:41.800821       1 pv_controller_base.go:505] deletion of claim \"volume-3799/pvc-9gpd8\" was already processed\nE1004 00:01:41.869086       1 namespace_controller.go:162] deletion of namespace kubectl-6798 failed: unexpected items still remain in namespace: kubectl-6798 for gvr: /v1, Resource=pods\nI1004 00:01:41.902369       1 namespace_controller.go:185] Namespace has been deleted volume-8092\nE1004 
00:01:42.307048       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-375/default: secrets \"default-token-9p9k8\" is forbidden: unable to create new content in namespace emptydir-375 because it is being terminated\nI1004 00:01:42.357170       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-3815^40c37df9-24a6-11ec-bb42-ee6d406726ba\") from node \"nodes-us-west3-a-gn9g\" \nI1004 00:01:42.448130       1 namespace_controller.go:185] Namespace has been deleted pods-921\nI1004 00:01:42.582206       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:42.582247       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:42.582336       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:42.582377       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:42.582392       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:01:42.695710       1 garbagecollector.go:471] \"Processing object\" object=\"proxy-2452/test-service-xdlsc\" objectUID=9d015c51-f6ba-438b-8754-26ef6086623d kind=\"EndpointSlice\" virtual=false\nI1004 00:01:42.701502       1 garbagecollector.go:580] \"Deleting object\" object=\"proxy-2452/test-service-xdlsc\" objectUID=9d015c51-f6ba-438b-8754-26ef6086623d kind=\"EndpointSlice\" propagationPolicy=Background\nI1004 00:01:42.904880       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-provisioning-3815^40c37df9-24a6-11ec-bb42-ee6d406726ba\") from node \"nodes-us-west3-a-gn9g\" \nI1004 00:01:42.905108       1 event.go:291] \"Event occurred\" object=\"provisioning-3815/pod-subpath-test-dynamicpv-9pg6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\\\" \"\nI1004 00:01:42.906476       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7180/pvc-hkkmf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-7180\\\" or manually created by system administrator\"\nI1004 00:01:42.910943       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7180/pvc-hkkmf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-7180\\\" or manually created by system administrator\"\nI1004 00:01:42.934708       1 pv_controller.go:879] volume \"pvc-80b14a12-5332-4ef7-8886-0d1144222759\" entered phase \"Bound\"\nI1004 00:01:42.934759       1 pv_controller.go:982] volume \"pvc-80b14a12-5332-4ef7-8886-0d1144222759\" bound to claim \"csi-mock-volumes-7180/pvc-hkkmf\"\nE1004 00:01:42.956665       1 tokens_controller.go:262] error synchronizing serviceaccount proxy-2452/default: secrets \"default-token-z8mph\" is forbidden: unable to create new content in namespace proxy-2452 because it is being terminated\nI1004 00:01:42.971373       1 pv_controller.go:823] claim \"csi-mock-volumes-7180/pvc-hkkmf\" entered phase \"Bound\"\nI1004 00:01:43.023618       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" (UniqueName: 
"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5") from node "nodes-us-west3-a-h26h" 
I1004 00:01:43.023713       1 event.go:291] "Event occurred" object="provisioning-5165/pod-subpath-test-dynamicpv-fvr5" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" "
I1004 00:01:43.948263       1 expand_controller.go:289] Ignoring the PVC "csi-mock-volumes-8416/pvc-g77b4" (uid: "bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I1004 00:01:43.948735       1 event.go:291] "Event occurred" object="csi-mock-volumes-8416/pvc-g77b4" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
I1004 00:01:44.776526       1 namespace_controller.go:185] Namespace has been deleted volume-6920
I1004 00:01:44.887376       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852") on node "nodes-us-west3-a-zfb0" 
I1004 00:01:45.109055       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7180^4") from node "nodes-us-west3-a-gn9g" 
I1004 00:01:45.389504       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9") on node "nodes-us-west3-a-chbv" 
I1004 00:01:45.664172       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7180^4") from node "nodes-us-west3-a-gn9g" 
I1004 00:01:45.664597       1 event.go:291] "Event occurred" object="csi-mock-volumes-7180/pvc-volume-tester-6l6t5" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-80b14a12-5332-4ef7-8886-0d1144222759\" "
I1004 00:01:46.452482       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853") on node "nodes-us-west3-a-h26h" 
I1004 00:01:46.456860       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853") on node "nodes-us-west3-a-h26h" 
I1004 00:01:46.981837       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-523/gcepdswzhc"
I1004 00:01:46.996281       1 pv_controller.go:640] volume "pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" is released and reclaim policy "Delete" will be executed
I1004 00:01:47.007897       1 pv_controller.go:879] volume "pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" entered phase "Released"
I1004 00:01:47.011296       1 pv_controller.go:1340] isVolumeReleased[pvc-923647c2-0450-4d55-ba0c-1595b8ef1853]: volume is released
I1004 00:01:47.210509       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") on node "nodes-us-west3-a-chbv" 
I1004 00:01:47.221878       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") on node "nodes-us-west3-a-chbv" 
E1004 00:01:47.281723       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3799/default: secrets "default-token-tgpzl" is forbidden: unable to create new content in namespace volume-3799 because it is being terminated
I1004 00:01:47.406004       1 namespace_controller.go:185] Namespace has been deleted emptydir-6399
I1004 00:01:47.445797       1 namespace_controller.go:185] Namespace has been deleted emptydir-375
E1004 00:01:47.594227       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:47.799610       1 garbagecollector.go:471] "Processing object" object="services-1548/nodeport-reuse-wql8f" objectUID=14b27f48-1420-4eda-b78c-c9d2fad77781 kind="EndpointSlice" virtual=false
I1004 00:01:47.812357       1 garbagecollector.go:580] "Deleting object" object="services-1548/nodeport-reuse-wql8f" objectUID=14b27f48-1420-4eda-b78c-c9d2fad77781 kind="EndpointSlice" propagationPolicy=Background
I1004 00:01:47.868151       1 pv_controller.go:1340] isVolumeReleased[pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9]: volume is released
I1004 00:01:47.869866       1 pv_controller.go:1340] isVolumeReleased[pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852]: volume is released
I1004 00:01:48.076948       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
E1004 00:01:48.079492       1 goroutinemap.go:150] Operation for "delete-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853[97252868-0b3b-4b46-9c3a-d04cc773d313]" failed. No retries permitted until 2021-10-04 00:01:48.579451362 +0000 UTC m=+1049.981947221 (durationBeforeRetry 500ms). Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource
I1004 00:01:48.080514       1 event.go:291] "Event occurred" object="pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource"
I1004 00:01:48.335294       1 pvc_protection_controller.go:291] "PVC is unused" PVC="fsgroupchangepolicy-9621/gcepddnbtw"
I1004 00:01:48.384724       1 pv_controller.go:640] volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" is released and reclaim policy "Delete" will be executed
I1004 00:01:48.400257       1 pv_controller.go:879] volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" entered phase "Released"
I1004 00:01:48.403858       1 pv_controller.go:1340] isVolumeReleased[pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a]: volume is released
E1004 00:01:48.703183       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8210/pvc-kh7j6: storageclass.storage.k8s.io "provisioning-8210" not found
I1004 00:01:48.703757       1 event.go:291] "Event occurred" object="provisioning-8210/pvc-kh7j6" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-8210\" not found"
I1004 00:01:48.744493       1 pv_controller.go:879] volume "local-8mrwb" entered phase "Available"
I1004 00:01:48.868125       1 pv_controller.go:879] volume "hostpath-n77fp" entered phase "Available"
I1004 00:01:48.941003       1 pv_controller.go:930] claim "pv-protection-6481/pvc-qnb6z" bound to volume "hostpath-n77fp"
I1004 00:01:48.952881       1 pv_controller.go:879] volume "hostpath-n77fp" entered phase "Bound"
I1004 00:01:48.953049       1 pv_controller.go:982] volume "hostpath-n77fp" bound to claim "pv-protection-6481/pvc-qnb6z"
I1004 00:01:48.970544       1 pv_controller.go:823] claim "pv-protection-6481/pvc-qnb6z" entered phase "Bound"
I1004 00:01:49.349574       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource
E1004 00:01:49.349884       1 goroutinemap.go:150] Operation for "delete-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a[89acddb6-5e6b-4eab-9d44-e162efe337c8]" failed. No retries permitted until 2021-10-04 00:01:49.84985614 +0000 UTC m=+1051.252352005 (durationBeforeRetry 500ms). Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource
I1004 00:01:49.350631       1 event.go:291] "Event occurred" object="pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-chbv', resourceInUseByAnotherResource"
I1004 00:01:50.252108       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9
I1004 00:01:50.252148       1 pv_controller.go:1435] volume "pvc-5c208186-c66b-4b68-a5b7-cd143cee12e9" deleted
I1004 00:01:50.273044       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2963
I1004 00:01:50.273956       1 pv_controller_base.go:505] deletion of claim "fsgroupchangepolicy-5898/gcepdcflks" was already processed
I1004 00:01:50.472706       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852
I1004 00:01:50.472964       1 pv_controller.go:1435] volume "pvc-fe377d71-9b34-4ad4-acf2-090d8eec3852" deleted
I1004 00:01:50.490219       1 pv_controller_base.go:505] deletion of claim "volumemode-6465/gcepdnqxg5" was already processed
I1004 00:01:51.066720       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-protection-6481/pvc-qnb6z"
I1004 00:01:51.076296       1 pv_controller.go:640] volume "hostpath-n77fp" is released and reclaim policy "Retain" will be executed
I1004 00:01:51.084310       1 pv_controller.go:879] volume "hostpath-n77fp" entered phase "Released"
I1004 00:01:51.091444       1 pv_controller_base.go:505] deletion of claim "pv-protection-6481/pvc-qnb6z" was already processed
I1004 00:01:51.945079       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-923647c2-0450-4d55-ba0c-1595b8ef1853" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853") on node "nodes-us-west3-a-h26h" 
I1004 00:01:52.578437       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-8651/csi-hostpathwsjd8"
I1004 00:01:52.587493       1 pv_controller.go:640] volume "pvc-01e53778-4593-4978-bf78-49dc8057299b" is released and reclaim policy "Delete" will be executed
I1004 00:01:52.600617       1 pv_controller.go:879] volume "pvc-01e53778-4593-4978-bf78-49dc8057299b" entered phase "Released"
I1004 00:01:52.601466       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:52.601717       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:52.601746       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:52.601759       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:52.601901       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:01:52.606472       1 pv_controller.go:1340] isVolumeReleased[pvc-01e53778-4593-4978-bf78-49dc8057299b]: volume is released
I1004 00:01:52.624783       1 pv_controller_base.go:505] deletion of claim "provisioning-8651/csi-hostpathwsjd8" was already processed
I1004 00:01:52.700008       1 namespace_controller.go:185] Namespace has been deleted volume-3799
I1004 00:01:52.825156       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a") on node "nodes-us-west3-a-chbv" 
E1004 00:01:52.858098       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:52.943696       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-9642/rs" need=10 creating=1
E1004 00:01:52.957170       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-9642/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "foo": the object has been modified; please apply your changes to the latest version and try again
I1004 00:01:52.958201       1 event.go:291] "Event occurred" object="disruption-9642/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-flq4w"
I1004 00:01:53.322170       1 namespace_controller.go:185] Namespace has been deleted statefulset-5081
E1004 00:01:53.838742       1 tokens_controller.go:262] error synchronizing serviceaccount job-9731/default: secrets "default-token-86cq5" is forbidden: unable to create new content in namespace job-9731 because it is being terminated
E1004 00:01:54.039483       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-2324/default: secrets "default-token-68j6w" is forbidden: unable to create new content in namespace kubectl-2324 because it is being terminated
I1004 00:01:54.340136       1 namespace_controller.go:185] Namespace has been deleted security-context-2688
I1004 00:01:54.436357       1 event.go:291] "Event occurred" object="ephemeral-8048-1548/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I1004 00:01:54.761334       1 event.go:291] "Event occurred" object="statefulset-6453/test-ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod test-ss-0 in StatefulSet test-ss successful"
E1004 00:01:55.116880       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:55.217084       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-458^2a27e524-24a6-11ec-a11e-7eae2341e105") on node "nodes-us-west3-a-gn9g" 
I1004 00:01:55.223506       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-458^2a27e524-24a6-11ec-a11e-7eae2341e105") on node "nodes-us-west3-a-gn9g" 
E1004 00:01:55.717899       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-8184/inline-volume-pg6tk-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I1004 00:01:55.719186       1 event.go:291] "Event occurred" object="ephemeral-8184/inline-volume-pg6tk-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I1004 00:01:55.756119       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-458^2a27e524-24a6-11ec-a11e-7eae2341e105") on node "nodes-us-west3-a-gn9g" 
I1004 00:01:55.788564       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8184, name: inline-volume-pg6tk, uid: 88e3ed9f-74cb-4ffe-92e4-be040da228ed] to the attemptToDelete, because it's waiting for its dependents to be deleted
I1004 00:01:55.789029       1 garbagecollector.go:471] "Processing object" object="ephemeral-8184/inline-volume-pg6tk" objectUID=88e3ed9f-74cb-4ffe-92e4-be040da228ed kind="Pod" virtual=false
I1004 00:01:55.789405       1 garbagecollector.go:471] "Processing object" object="ephemeral-8184/inline-volume-pg6tk-my-volume" objectUID=1fc94a38-c870-4426-a13b-2f3c3d6b0c33 kind="PersistentVolumeClaim" virtual=false
I1004 00:01:55.806491       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8184, name: inline-volume-pg6tk-my-volume, uid: 1fc94a38-c870-4426-a13b-2f3c3d6b0c33] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8184, name: inline-volume-pg6tk, uid: 88e3ed9f-74cb-4ffe-92e4-be040da228ed] is deletingDependents
I1004 00:01:55.808530       1 garbagecollector.go:580] "Deleting object" object="ephemeral-8184/inline-volume-pg6tk-my-volume" objectUID=1fc94a38-c870-4426-a13b-2f3c3d6b0c33 kind="PersistentVolumeClaim" propagationPolicy=Background
E1004 00:01:55.814618       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-8184/inline-volume-pg6tk-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I1004 00:01:55.815190       1 event.go:291] "Event occurred" object="ephemeral-8184/inline-volume-pg6tk-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I1004 00:01:55.815534       1 garbagecollector.go:471] "Processing object" object="ephemeral-8184/inline-volume-pg6tk-my-volume" objectUID=1fc94a38-c870-4426-a13b-2f3c3d6b0c33 kind="PersistentVolumeClaim" virtual=false
I1004 00:01:55.819408       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-8184/inline-volume-pg6tk-my-volume"
I1004 00:01:55.824402       1 garbagecollector.go:471] "Processing object" object="ephemeral-8184/inline-volume-pg6tk" objectUID=88e3ed9f-74cb-4ffe-92e4-be040da228ed kind="Pod" virtual=false
I1004 00:01:55.827362       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-8184, name: inline-volume-pg6tk, uid: 88e3ed9f-74cb-4ffe-92e4-be040da228ed]
I1004 00:01:56.000854       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-4528/test-rollover-deployment"
E1004 00:01:56.066112       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:01:56.119412       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-458/csi-hostpathljwm8"
I1004 00:01:56.130199       1 pv_controller.go:640] volume "pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1" is released and reclaim policy "Delete" will be executed
I1004 00:01:56.134851       1 pv_controller.go:879] volume "pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1" entered phase "Released"
I1004 00:01:56.138387       1 pv_controller.go:1340] isVolumeReleased[pvc-34671683-8554-4c2e-9ca0-3dadbbf268b1]: volume is released
I1004 00:01:56.167093       1 pv_controller_base.go:505] deletion of claim "volume-458/csi-hostpathljwm8" was already processed
I1004 00:01:57.911040       1 namespace_controller.go:185] Namespace has been deleted configmap-5387
I1004 00:01:58.065291       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-9642/rs" need=10 creating=1
I1004 00:01:58.372144       1 namespace_controller.go:185] Namespace has been deleted services-1548
I1004 00:01:58.431709       1 namespace_controller.go:185] Namespace has been deleted provisioning-7721
I1004 00:01:58.464420       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-8416/pvc-g77b4"
I1004 00:01:58.524407       1 pv_controller.go:640] volume "pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f" is released and reclaim policy "Delete" will be executed
I1004 00:01:58.552335       1 pv_controller.go:879] volume "pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f" entered phase "Released"
I1004 00:01:58.596615       1 pv_controller.go:1340] isVolumeReleased[pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f]: volume is released
E1004 00:01:58.718271       1 tokens_controller.go:262] error synchronizing serviceaccount pv-protection-6481/default: secrets "default-token-n89tn" is forbidden: unable to create new content in namespace pv-protection-6481 because it is being terminated
E1004 00:01:58.948798       1 tokens_controller.go:262] error synchronizing serviceaccount clientset-5388/default: secrets "default-token-zsqng" is forbidden: unable to create new content in namespace clientset-5388 because it is being terminated
I1004 00:01:59.057102       1 namespace_controller.go:185] Namespace has been deleted job-9731
E1004 00:01:59.093887       1 namespace_controller.go:162] deletion of namespace apply-7557 failed: unexpected items still remain in namespace: apply-7557 for gvr: /v1, Resource=pods
I1004 00:01:59.149298       1 namespace_controller.go:185] Namespace has been deleted kubectl-2324
I1004 00:01:59.214774       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-2qvxf" objectUID=e964bd98-0091-4e4e-833d-01e7889d5b8b kind="Pod" virtual=false
I1004 00:01:59.215322       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-flq4w" objectUID=6e81037e-0f27-4751-9f7a-eb57f01e6d15 kind="Pod" virtual=false
I1004 00:01:59.215353       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-82r99" objectUID=22ff4b20-cce8-49f1-b1a8-11210a1d063f kind="Pod" virtual=false
I1004 00:01:59.215369       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-gqngf" objectUID=8a21055d-e756-4a98-ab3d-639324ad1b63 kind="Pod" virtual=false
I1004 00:01:59.215412       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-w8s27" objectUID=62f47533-2080-49b7-9b8a-264a7993244b kind="Pod" virtual=false
I1004 00:01:59.215518       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-jp9sm" objectUID=95b76fea-9914-4b52-b07e-9cde18beb1bd kind="Pod" virtual=false
I1004 00:01:59.215588       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-24krk" objectUID=996e9a39-fa40-4e9d-afbd-7d5f581288b1 kind="Pod" virtual=false
I1004 00:01:59.215204       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-pr45n" objectUID=81132dab-c834-4533-853e-66d694e40244 kind="Pod" virtual=false
I1004 00:01:59.215737       1 garbagecollector.go:471] "Processing object" object="disruption-9642/rs-svhsn" objectUID=38d3b314-c57a-4bbc-9641-1e0d5a1b8e9a kind="Pod" virtual=false
I1004 00:01:59.381110       1 event.go:291] "Event occurred" object="ephemeral-8184-6671/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I1004 00:01:59.472864       1 event.go:291] "Event occurred" object="ephemeral-8184/inline-volume-tester-c4xv2-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForPodScheduled" message="waiting for pod inline-volume-tester-c4xv2 to be scheduled"
E1004 00:01:59.656052       1 tokens_controller.go:262] error synchronizing serviceaccount fsgroupchangepolicy-5898/default: secrets "default-token-pt5q8" is forbidden: unable to create new content in namespace fsgroupchangepolicy-5898 because it is being terminated
E1004 00:01:59.856259       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-9034/pvc-5wzgv: storageclass.storage.k8s.io "volume-9034" not found
I1004 00:01:59.856668       1 event.go:291] "Event occurred" object="volume-9034/pvc-5wzgv" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-9034\" not found"
I1004 00:01:59.891025       1 pv_controller.go:879] volume "local-ng5m5" entered phase "Available"
I1004 00:02:00.126376       1 event.go:291] "Event occurred" object="cronjob-770/forbid" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job forbid-27221762"
I1004 00:02:00.126799       1 job_controller.go:406] enqueueing job cronjob-770/forbid-27221762
I1004 00:02:00.138446       1 job_controller.go:406] enqueueing job cronjob-770/forbid-27221762
I1004 00:02:00.139932       1 event.go:291] "Event occurred" object="cronjob-770/forbid-27221762" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: forbid-27221762--1-dplq7"
I1004 00:02:00.147014       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-770/forbid" resourceVersion="34262" err="Operation cannot be fulfilled on cronjobs.batch \"forbid\": the object has been modified; please apply your changes to the latest version and try again"
E1004 00:02:00.147052       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-770/forbid, requeuing: Operation cannot be fulfilled on cronjobs.batch "forbid": the object has been modified; please apply your changes to the latest version and try again
I1004 00:02:00.156332       1 job_controller.go:406] enqueueing job cronjob-770/forbid-27221762
I1004 00:02:00.161336       1 job_controller.go:406] enqueueing job cronjob-770/forbid-27221762
I1004 00:02:00.323281       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=6 creating=6
I1004 00:02:00.323571       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-847dcfb7fb to 6"
I1004 00:02:00.332636       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:00.340287       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-njjqh"
I1004 00:02:00.357787       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-664js"
I1004 00:02:00.363494       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-wxn7q"
I1004 00:02:00.381103       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-hrn5n"
I1004 00:02:00.389031       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-2f922"
I1004 00:02:00.389973       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-pvx82"
I1004 00:02:00.408848       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-847dcfb7fb to 5"
I1004 00:02:00.444672       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=2 creating=2
I1004 00:02:00.444677       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-59b8b5fbf to 2"
I1004 00:02:00.449496       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=5 deleting=1
I1004 00:02:00.449545       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-847dcfb7fb" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf]
I1004 00:02:00.449672       1 controller_utils.go:592] "Deleting pod" controller="webserver-847dcfb7fb" pod="deployment-2042/webserver-847dcfb7fb-wxn7q"
I1004 00:02:00.482450       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-59b8b5fbf-vpzr5"
I1004 00:02:00.492624       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-59b8b5fbf-vnxks"
I1004 00:02:00.502683       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:00.504222       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-847dcfb7fb-wxn7q"
I1004 00:02:00.542270       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-847dcfb7fb to 4"
I1004 00:02:00.592244       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=4 creating=1
I1004 00:02:00.593180       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-59b8b5fbf to 3"
I1004 00:02:00.593792       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=3 creating=1
I1004 00:02:00.603596       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-rd2lf"
I1004 00:02:00.626063       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-59b8b5fbf-n96qh"
I1004 00:02:00.680547       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=4 creating=1
I1004 00:02:00.695458       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-rvwdn"
I1004 00:02:00.832902       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=4 creating=1
I1004 00:02:00.878257       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-d9shp"
I1004 00:02:01.027205       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=5 creating=1
I1004 00:02:01.028471       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-847dcfb7fb to 5"
I1004 00:02:01.038765       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-hfgtv"
I1004 00:02:01.066630       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=0 deleting=3
I1004 00:02:01.067190       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-59b8b5fbf" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf]
I1004 00:02:01.067529       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-vnxks"
I1004 00:02:01.067734       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-n96qh"
I1004 00:02:01.067781       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-vpzr5"
I1004 00:02:01.068347       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-59b8b5fbf to 0"
I1004 00:02:01.075643       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:01.084470       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-8f87946bf to 3"
I1004 00:02:01.139617       1 event.go:291] "Event occurred" object="ephemeral-8184/inline-volume-tester-c4xv2-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-8184\" or manually created by system administrator"
I1004 00:02:01.139943       1 event.go:291] "Event occurred" object="ephemeral-8184/inline-volume-tester-c4xv2-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-8184\" or manually created by system administrator"
I1004 00:02:01.238048       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-vnxks"
I1004 00:02:01.283510       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-n96qh"
I1004 00:02:01.337721       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-vpzr5"
I1004 00:02:01.590762       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=3 creating=3
I1004 00:02:01.730534       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-74jlz"
I1004 00:02:01.880694       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-pbrt5"
I1004 00:02:01.929520       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-w5496"
I1004 00:02:01.961678       1 garbagecollector.go:471] "Processing object" object="cronjob-770/forbid-27221762--1-dplq7" objectUID=1076edea-f31a-402f-b532-20ef9af20036 kind="Pod" virtual=false
I1004 00:02:01.962416       1 job_controller.go:406] enqueueing job cronjob-770/forbid-27221762
I1004 00:02:01.965926       1 garbagecollector.go:580] "Deleting object" object="cronjob-770/forbid-27221762--1-dplq7" objectUID=1076edea-f31a-402f-b532-20ef9af20036 kind="Pod" propagationPolicy=Background
I1004 00:02:01.966322       1 event.go:291] "Event occurred" object="cronjob-770/forbid" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="MissingJob" message="Active job went missing: forbid-27221762"
I1004 00:02:02.622483       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:02.622783       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:02.622889       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:02.622986       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:02.623086       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
E1004 00:02:02.852820       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8651/default: secrets "default-token-8xqlj" is forbidden: unable to create new content in namespace provisioning-8651 because it is being terminated
I1004 00:02:02.861721       1 pv_controller.go:930] claim "provisioning-8210/pvc-kh7j6" bound to volume "local-8mrwb"
I1004 00:02:02.868521       1 pv_controller.go:1340] isVolumeReleased[pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f]: volume is released
I1004 00:02:02.868864       1 pv_controller.go:1340] isVolumeReleased[pvc-923647c2-0450-4d55-ba0c-1595b8ef1853]: volume is released
I1004 00:02:02.878958       1 pv_controller.go:1340] isVolumeReleased[pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a]: volume is released
I1004 00:02:02.881803       1 pv_controller.go:879] volume "local-8mrwb" entered phase "Bound"
I1004 00:02:02.881850       1 pv_controller.go:982] volume "local-8mrwb" bound to claim "provisioning-8210/pvc-kh7j6"
I1004 00:02:02.896356       1 
pv_controller.go:823] claim \"provisioning-8210/pvc-kh7j6\" entered phase \"Bound\"\nI1004 00:02:02.897086       1 pv_controller.go:930] claim \"volume-9034/pvc-5wzgv\" bound to volume \"local-ng5m5\"\nI1004 00:02:02.897353       1 event.go:291] \"Event occurred\" object=\"ephemeral-8184/inline-volume-tester-c4xv2-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-8184\\\" or manually created by system administrator\"\nI1004 00:02:02.915104       1 pv_controller.go:879] volume \"local-ng5m5\" entered phase \"Bound\"\nI1004 00:02:02.915151       1 pv_controller.go:982] volume \"local-ng5m5\" bound to claim \"volume-9034/pvc-5wzgv\"\nI1004 00:02:02.931222       1 pv_controller.go:823] claim \"volume-9034/pvc-5wzgv\" entered phase \"Bound\"\nI1004 00:02:03.034283       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8416^4\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:02:03.039252       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8416^4\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:02:03.608138       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-bf4fda1c-7ec6-4c04-b71e-97ac1e780a7f\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8416^4\") on node \"nodes-us-west3-a-zfb0\" \nI1004 00:02:03.747648       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 2\"\nI1004 00:02:03.757649       1 deployment_controller.go:490] \"Error syncing 
deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-59b8b5fbf\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:03.772147       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-2042/webserver-847dcfb7fb\" need=2 deleting=3\nI1004 00:02:03.772750       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-2042/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf]\nI1004 00:02:03.773012       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-2042/webserver-847dcfb7fb-664js\"\nI1004 00:02:03.772592       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 2\"\nI1004 00:02:03.775576       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-2042/webserver-847dcfb7fb-rvwdn\"\nI1004 00:02:03.775601       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-2042/webserver-847dcfb7fb-hfgtv\"\nI1004 00:02:03.775805       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:03.802028       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-664js\"\nI1004 00:02:03.812393       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" 
need=3 creating=3\nI1004 00:02:03.816764       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-59b8b5fbf to 3\"\nI1004 00:02:03.830880       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-64xt5\"\nI1004 00:02:03.841170       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-rvwdn\"\nI1004 00:02:03.844808       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-hfgtv\"\nI1004 00:02:03.879701       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-jz44j\"\nI1004 00:02:03.881139       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-mrh6n\"\nI1004 00:02:03.899142       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:03.952083       1 namespace_controller.go:185] Namespace has been deleted pv-protection-6481\nI1004 00:02:03.960244       1 deployment_controller.go:490] \"Error syncing 
deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:03.980566       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 4\"\nI1004 00:02:04.052866       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-59b8b5fbf\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:04.067643       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:04.072167       1 namespace_controller.go:185] Namespace has been deleted clientset-5388\nE1004 00:02:04.242944       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1004 00:02:04.486415       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-8424/default: secrets \"default-token-cjn2d\" is forbidden: unable to create new content in namespace nettest-8424 because it is being terminated\nI1004 00:02:04.920187       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8651-9245/csi-hostpathplugin-5f6748c57\" objectUID=726d63d4-a798-42e9-80a7-e70eceae83df kind=\"ControllerRevision\" virtual=false\nI1004 00:02:04.920661       1 stateful_set.go:440] StatefulSet has been deleted 
provisioning-8651-9245/csi-hostpathplugin\nI1004 00:02:04.920844       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8651-9245/csi-hostpathplugin-0\" objectUID=a466ec50-9b2d-40fd-b83c-95a596ee12bb kind=\"Pod\" virtual=false\nI1004 00:02:04.923709       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8651-9245/csi-hostpathplugin-0\" objectUID=a466ec50-9b2d-40fd-b83c-95a596ee12bb kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:04.924446       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8651-9245/csi-hostpathplugin-5f6748c57\" objectUID=726d63d4-a798-42e9-80a7-e70eceae83df kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:05.048652       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-3815/csi-hostpathhj76d\"\nI1004 00:02:05.058825       1 pv_controller.go:640] volume \"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:02:05.064317       1 pv_controller.go:879] volume \"pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84\" entered phase \"Released\"\nI1004 00:02:05.066828       1 pv_controller.go:1340] isVolumeReleased[pvc-2d79bf72-d3ef-43f1-b1b0-ad29f05b2f84]: volume is released\nI1004 00:02:05.078651       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-5898\nI1004 00:02:05.149710       1 pv_controller_base.go:505] deletion of claim \"provisioning-3815/csi-hostpathhj76d\" was already processed\nI1004 00:02:05.253232       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\nI1004 00:02:05.253588       1 pv_controller.go:1435] volume \"pvc-923647c2-0450-4d55-ba0c-1595b8ef1853\" deleted\nI1004 00:02:05.270265       1 pv_controller_base.go:505] deletion of claim \"volume-expand-523/gcepdswzhc\" was already processed\nI1004 00:02:05.436739       1 gce_util.go:89] Successfully deleted GCE PD volume 
e2e-5690f240d7-0a91d-k-pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a\nI1004 00:02:05.436797       1 pv_controller.go:1435] volume \"pvc-d3749ff6-d479-4061-9cf1-d9019fa6c91a\" deleted\nI1004 00:02:05.449311       1 pv_controller_base.go:505] deletion of claim \"fsgroupchangepolicy-9621/gcepddnbtw\" was already processed\nI1004 00:02:05.586603       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-8416/pvc-g77b4\" was already processed\nI1004 00:02:05.843056       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=3 creating=1\nI1004 00:02:05.852657       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-wx4lv\"\nI1004 00:02:05.879288       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-8f87946bf\" need=3 creating=1\nI1004 00:02:05.893790       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-8f87946bf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-8f87946bf-fw8z6\"\nI1004 00:02:05.937611       1 namespace_controller.go:185] Namespace has been deleted volumemode-6465\nI1004 00:02:07.799379       1 namespace_controller.go:185] Namespace has been deleted pods-4199\nI1004 00:02:08.008979       1 namespace_controller.go:185] Namespace has been deleted provisioning-8651\nI1004 00:02:08.012426       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=2 deleting=1\nI1004 00:02:08.012740       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" relatedReplicaSets=[webserver-59b8b5fbf webserver-8f87946bf webserver-78ddffb785 webserver-847dcfb7fb]\nI1004 00:02:08.012841       1 event.go:291] \"Event occurred\" 
object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-59b8b5fbf to 2\"\nI1004 00:02:08.013145       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-59b8b5fbf\" pod=\"deployment-2042/webserver-59b8b5fbf-wx4lv\"\nI1004 00:02:08.040922       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-59b8b5fbf-wx4lv\"\nI1004 00:02:08.057315       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-2042/webserver-847dcfb7fb\" need=1 deleting=1\nI1004 00:02:08.058179       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-2042/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-8f87946bf webserver-78ddffb785 webserver-847dcfb7fb webserver-59b8b5fbf]\nI1004 00:02:08.059163       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 1\"\nI1004 00:02:08.058600       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-2042/webserver-847dcfb7fb-rd2lf\"\nI1004 00:02:08.092565       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=3 creating=1\nI1004 00:02:08.093799       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-59b8b5fbf to 3\"\nI1004 00:02:08.109421       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been 
modified; please apply your changes to the latest version and try again\"\nI1004 00:02:08.110183       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-g7sfp\"\nI1004 00:02:08.111973       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-rd2lf\"\nI1004 00:02:08.150583       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:08.335456       1 garbagecollector.go:471] \"Processing object\" object=\"volume-458-8211/csi-hostpathplugin-84875d6d5d\" objectUID=c1a978a1-53e1-4658-98f4-ea435d30267e kind=\"ControllerRevision\" virtual=false\nI1004 00:02:08.335772       1 stateful_set.go:440] StatefulSet has been deleted volume-458-8211/csi-hostpathplugin\nI1004 00:02:08.335852       1 garbagecollector.go:471] \"Processing object\" object=\"volume-458-8211/csi-hostpathplugin-0\" objectUID=ff7cf69f-6290-4c4f-8583-7e51fd560745 kind=\"Pod\" virtual=false\nI1004 00:02:08.338777       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-458-8211/csi-hostpathplugin-0\" objectUID=ff7cf69f-6290-4c4f-8583-7e51fd560745 kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:08.339901       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-458-8211/csi-hostpathplugin-84875d6d5d\" objectUID=c1a978a1-53e1-4658-98f4-ea435d30267e kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:09.443434       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=3 
creating=1\nI1004 00:02:09.451586       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-x577r\"\nI1004 00:02:09.482253       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=3 creating=1\nI1004 00:02:09.491425       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-mq8vl\"\nI1004 00:02:09.520386       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-847dcfb7fb\" need=1 creating=1\nI1004 00:02:09.536785       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-sffhw\"\nI1004 00:02:09.667382       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-628\nI1004 00:02:09.858416       1 namespace_controller.go:185] Namespace has been deleted proxy-2452\nI1004 00:02:10.377654       1 namespace_controller.go:185] Namespace has been deleted nettest-6874\nI1004 00:02:10.448319       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\") on node \"nodes-us-west3-a-h26h\" \nI1004 00:02:10.451911       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" (UniqueName: \"kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\") on node \"nodes-us-west3-a-h26h\" \nI1004 00:02:11.480681       1 namespace_controller.go:185] Namespace has been 
deleted volume-458\nI1004 00:02:11.815022       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-5165/gcepdczng6\"\nI1004 00:02:11.827614       1 pv_controller.go:640] volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:02:11.835865       1 pv_controller.go:879] volume \"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" entered phase \"Released\"\nI1004 00:02:11.840473       1 pv_controller.go:1340] isVolumeReleased[pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5]: volume is released\nE1004 00:02:11.975133       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-8416/default: serviceaccounts \"default\" not found\nI1004 00:02:12.611084       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:12.611128       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:12.611167       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:12.611182       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:12.611197       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:12.878300       1 gce_util.go:84] Error deleting GCE PD volume e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource\nE1004 
00:02:12.878639       1 goroutinemap.go:150] Operation for \"delete-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5[dbe06355-628b-4cdb-a0c8-e16e2d009b64]\" failed. No retries permitted until 2021-10-04 00:02:13.378581148 +0000 UTC m=+1074.781077004 (durationBeforeRetry 500ms). Error: googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource\nI1004 00:02:12.879257       1 event.go:291] \"Event occurred\" object=\"pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"googleapi: Error 400: The disk resource 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/disks/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5' is already being used by 'projects/gce-gci-upg-lat-1-3-ctl-skew/zones/us-west3-a/instances/nodes-us-west3-a-h26h', resourceInUseByAnotherResource\"\nI1004 00:02:13.139439       1 expand_controller.go:289] Ignoring the PVC \"csi-mock-volumes-7180/pvc-hkkmf\" (uid: \"80b14a12-5332-4ef7-8886-0d1144222759\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI1004 00:02:13.140287       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7180/pvc-hkkmf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nE1004 00:02:13.466181       1 tokens_controller.go:262] error synchronizing serviceaccount volume-458-8211/default: secrets \"default-token-bp9wz\" is forbidden: unable to create new content in namespace volume-458-8211 because it is being terminated\nI1004 
00:02:13.553569       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-595d5697f6\" objectUID=655ad5fb-08b3-4a0f-9fef-b021591155b3 kind=\"ControllerRevision\" virtual=false\nI1004 00:02:13.554445       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8416-7569/csi-mockplugin\nI1004 00:02:13.554631       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-0\" objectUID=ff6db90b-6dce-4b4c-9f0c-b0fdfaaf7b4b kind=\"Pod\" virtual=false\nI1004 00:02:13.559901       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-595d5697f6\" objectUID=655ad5fb-08b3-4a0f-9fef-b021591155b3 kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:13.569942       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-0\" objectUID=ff6db90b-6dce-4b4c-9f0c-b0fdfaaf7b4b kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:13.604440       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-attacher-6c757d7886\" objectUID=223f3e2b-604b-49e9-b55c-fb376c23e7d5 kind=\"ControllerRevision\" virtual=false\nI1004 00:02:13.604689       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8416-7569/csi-mockplugin-attacher\nI1004 00:02:13.604763       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-attacher-0\" objectUID=6a034136-cbc8-4cdd-94b7-1680267e47a3 kind=\"Pod\" virtual=false\nI1004 00:02:13.612328       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 5\"\nI1004 00:02:13.620948       1 garbagecollector.go:580] \"Deleting object\" 
object=\"csi-mock-volumes-8416-7569/csi-mockplugin-attacher-6c757d7886\" objectUID=223f3e2b-604b-49e9-b55c-fb376c23e7d5 kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:13.632621       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-attacher-0\" objectUID=6a034136-cbc8-4cdd-94b7-1680267e47a3 kind=\"Pod\" propagationPolicy=Background\nE1004 00:02:13.653869       1 tokens_controller.go:262] error synchronizing serviceaccount fsgroupchangepolicy-9621/default: secrets \"default-token-t4ph9\" is forbidden: unable to create new content in namespace fsgroupchangepolicy-9621 because it is being terminated\nE1004 00:02:13.659291       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1004 00:02:13.659974       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-resizer-7d86d65f97\" objectUID=4e04c5b5-cfbe-43f4-888e-7a1611d299ab kind=\"ControllerRevision\" virtual=false\nI1004 00:02:13.661097       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2042/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-78ddffb785\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1004 00:02:13.663674       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8416-7569/csi-mockplugin-resizer\nI1004 00:02:13.663894       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-resizer-0\" objectUID=fce9df06-17dc-452e-80bd-8576a76c9718 kind=\"Pod\" virtual=false\nI1004 00:02:13.671260       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8416-7569/csi-mockplugin-resizer-0\" objectUID=fce9df06-17dc-452e-80bd-8576a76c9718 
kind="Pod" propagationPolicy=Background
I1004 00:02:13.672631       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8416-7569/csi-mockplugin-resizer-7d86d65f97" objectUID=4e04c5b5-cfbe-43f4-888e-7a1611d299ab kind="ControllerRevision" propagationPolicy=Background
I1004 00:02:13.709274       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=0 deleting=3
I1004 00:02:13.709331       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-59b8b5fbf" relatedReplicaSets=[webserver-78ddffb785 webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf]
I1004 00:02:13.710720       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-x577r"
I1004 00:02:13.711010       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-mq8vl"
I1004 00:02:13.711334       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-mrh6n"
I1004 00:02:13.712738       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-59b8b5fbf to 0"
I1004 00:02:13.739565       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:13.777369       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-x577r"
I1004 00:02:13.777423       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-mrh6n"
I1004 00:02:13.786821       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=3 creating=3
I1004 00:02:13.790348       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-mq8vl"
I1004 00:02:13.791047       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-78ddffb785 to 3"
I1004 00:02:13.850611       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-qprcr"
I1004 00:02:13.858027       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-dm7g7"
I1004 00:02:13.897250       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-vjq65"
I1004 00:02:14.911947       1 event.go:291] "Event occurred" object="statefulset-6453/test-ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod test-ss-1 in StatefulSet test-ss successful"
E1004 00:02:15.215189       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3815/default: secrets "default-token-2l5hm" is forbidden: unable to create new content in namespace provisioning-3815 because it is being terminated
E1004 00:02:15.281345       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:02:15.350344       1 namespace_controller.go:185] Namespace has been deleted provisioning-8651-9245
I1004 00:02:15.731682       1 pv_controller.go:879] volume "pvc-c4d3dac6-9192-42e5-9177-cd3dd8c8c4fa" entered phase "Bound"
I1004 00:02:15.731947       1 pv_controller.go:982] volume "pvc-c4d3dac6-9192-42e5-9177-cd3dd8c8c4fa" bound to claim "ephemeral-8184/inline-volume-tester-c4xv2-my-volume-0"
I1004 00:02:15.743090       1 pv_controller.go:823] claim "ephemeral-8184/inline-volume-tester-c4xv2-my-volume-0" entered phase "Bound"
E1004 00:02:15.893658       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:02:15.939157       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2401/test-deployment-w85gd-794dd694d8" need=1 creating=1
I1004 00:02:15.940693       1 event.go:291] "Event occurred" object="deployment-2401/test-deployment-w85gd" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-deployment-w85gd-794dd694d8 to 1"
I1004 00:02:15.956240       1 event.go:291] "Event occurred" object="deployment-2401/test-deployment-w85gd-794dd694d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-w85gd-794dd694d8-cghpj"
I1004 00:02:15.965971       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2401/test-deployment-w85gd" err="Operation cannot be fulfilled on deployments.apps \"test-deployment-w85gd\": the object has been modified; please apply your changes to the latest version and try again"
W1004 00:02:16.138097       1 reconciler.go:222] attacherDetacher.DetachVolume started for volume "pvc-37a65c3c-bd3d-4fda-9fdd-fff575052c7a" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-6678^6b00b959-24a5-11ec-a065-8a0b90323122") on node "nodes-us-west3-a-gn9g" This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching
I1004 00:02:16.242909       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-c4d3dac6-9192-42e5-9177-cd3dd8c8c4fa" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-8184^555831ec-24a6-11ec-844f-5efb37caca84") from node "nodes-us-west3-a-chbv"
I1004 00:02:16.436468       1 event.go:291] "Event occurred" object="csi-mock-volumes-8219-560/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I1004 00:02:16.476280       1 event.go:291] "Event occurred" object="csi-mock-volumes-8219-560/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I1004 00:02:16.719704       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-37a65c3c-bd3d-4fda-9fdd-fff575052c7a" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-6678^6b00b959-24a5-11ec-a065-8a0b90323122") on node "nodes-us-west3-a-gn9g"
I1004 00:02:16.815116       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-c4d3dac6-9192-42e5-9177-cd3dd8c8c4fa" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-8184^555831ec-24a6-11ec-844f-5efb37caca84") from node "nodes-us-west3-a-chbv"
I1004 00:02:16.815600       1 event.go:291] "Event occurred" object="ephemeral-8184/inline-volume-tester-c4xv2" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-c4d3dac6-9192-42e5-9177-cd3dd8c8c4fa\" "
I1004 00:02:16.982013       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=3 creating=1
I1004 00:02:17.030220       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-qk2mb"
I1004 00:02:17.045873       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8416
I1004 00:02:17.060944       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=1 creating=1
I1004 00:02:17.140909       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-8k2jr"
I1004 00:02:17.193318       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=3 creating=1
I1004 00:02:17.209272       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:17.213041       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-rsw5j"
I1004 00:02:17.303343       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5" (UniqueName: "kubernetes.io/gce-pd/e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5") on node "nodes-us-west3-a-h26h"
I1004 00:02:17.431257       1 namespace_controller.go:185] Namespace has been deleted volume-expand-523
I1004 00:02:17.454390       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-4160/test-rolling-update-with-lb-864fb64577" need=3 creating=3
I1004 00:02:17.455611       1 event.go:291] "Event occurred" object="deployment-4160/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-864fb64577 to 3"
I1004 00:02:17.472986       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-4160/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:17.476102       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=3 creating=1
I1004 00:02:17.500003       1 event.go:291] "Event occurred" object="deployment-4160/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-864fb64577-pmxlm"
I1004 00:02:17.500038       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-dxnxd"
I1004 00:02:17.521943       1 event.go:291] "Event occurred" object="deployment-4160/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-864fb64577-2bcgw"
I1004 00:02:17.529031       1 event.go:291] "Event occurred" object="deployment-4160/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-864fb64577-p4tfw"
I1004 00:02:17.553027       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=1 creating=1
I1004 00:02:17.573237       1 garbagecollector.go:471] "Processing object" object="provisioning-3815-1990/csi-hostpathplugin-0" objectUID=82f1a722-e9e7-47a0-bd80-162a86560290 kind="Pod" virtual=false
I1004 00:02:17.574942       1 stateful_set.go:440] StatefulSet has been deleted provisioning-3815-1990/csi-hostpathplugin
I1004 00:02:17.575125       1 garbagecollector.go:471] "Processing object" object="provisioning-3815-1990/csi-hostpathplugin-85b76b4f89" objectUID=644f731b-df8b-4b57-bb23-37b3dc23e860 kind="ControllerRevision" virtual=false
I1004 00:02:17.576569       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-66nwl"
I1004 00:02:17.596970       1 garbagecollector.go:580] "Deleting object" object="provisioning-3815-1990/csi-hostpathplugin-85b76b4f89" objectUID=644f731b-df8b-4b57-bb23-37b3dc23e860 kind="ControllerRevision" propagationPolicy=Background
I1004 00:02:17.604929       1 garbagecollector.go:580] "Deleting object" object="provisioning-3815-1990/csi-hostpathplugin-0" objectUID=82f1a722-e9e7-47a0-bd80-162a86560290 kind="Pod" propagationPolicy=Background
I1004 00:02:17.651726       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=3 creating=1
I1004 00:02:17.668248       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-qv64t"
I1004 00:02:17.717229       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=3 creating=1
I1004 00:02:17.739904       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-rmd4t"
I1004 00:02:17.868104       1 pv_controller.go:1340] isVolumeReleased[pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5]: volume is released
I1004 00:02:18.170853       1 namespace_controller.go:185] Namespace has been deleted provisioning-9498
I1004 00:02:19.000887       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-7536/test-cleanup-deployment"
I1004 00:02:19.212193       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-847dcfb7fb to 0"
I1004 00:02:19.212577       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-847dcfb7fb" need=0 deleting=1
I1004 00:02:19.212708       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-847dcfb7fb" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf webserver-78ddffb785]
I1004 00:02:19.212988       1 controller_utils.go:592] "Deleting pod" controller="webserver-847dcfb7fb" pod="deployment-2042/webserver-847dcfb7fb-66nwl"
I1004 00:02:19.233428       1 event.go:291] "Event occurred" object="deployment-2042/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-847dcfb7fb-66nwl"
I1004 00:02:19.235389       1 namespace_controller.go:185] Namespace has been deleted volume-458-8211
I1004 00:02:19.242559       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=4 creating=1
I1004 00:02:19.244113       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-78ddffb785 to 4"
I1004 00:02:19.253052       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-zqqwg"
I1004 00:02:19.255501       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:19.302761       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-9621
I1004 00:02:19.432465       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8048, name: inline-volume-tester2-f9q2q, uid: 6dd79184-05d9-4bcf-99f5-2ded14d982ad] to the attemptToDelete, because it's waiting for its dependents to be deleted
I1004 00:02:19.433189       1 garbagecollector.go:471] "Processing object" object="ephemeral-8048/inline-volume-tester2-f9q2q" objectUID=6dd79184-05d9-4bcf-99f5-2ded14d982ad kind="Pod" virtual=false
I1004 00:02:19.440963       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-8048, name: inline-volume-tester2-f9q2q, uid: 6dd79184-05d9-4bcf-99f5-2ded14d982ad]
E1004 00:02:19.483756       1 namespace_controller.go:162] deletion of namespace prestop-9797 failed: unexpected items still remain in namespace: prestop-9797 for gvr: /v1, Resource=pods
I1004 00:02:19.840966       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="DeploymentRollback" message="Rolled back deployment \"webserver\" to revision 6"
I1004 00:02:19.853564       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-59b8b5fbf\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:19.863314       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=1 deleting=2
I1004 00:02:19.863372       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-8f87946bf" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf webserver-78ddffb785]
I1004 00:02:19.863753       1 controller_utils.go:592] "Deleting pod" controller="webserver-8f87946bf" pod="deployment-2042/webserver-8f87946bf-qv64t"
I1004 00:02:19.864002       1 controller_utils.go:592] "Deleting pod" controller="webserver-8f87946bf" pod="deployment-2042/webserver-8f87946bf-rmd4t"
I1004 00:02:19.864240       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-8f87946bf to 1"
I1004 00:02:19.868641       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=3 deleting=1
I1004 00:02:19.868691       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-78ddffb785" relatedReplicaSets=[webserver-78ddffb785 webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf]
I1004 00:02:19.868982       1 controller_utils.go:592] "Deleting pod" controller="webserver-78ddffb785" pod="deployment-2042/webserver-78ddffb785-zqqwg"
I1004 00:02:19.869214       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-78ddffb785 to 3"
I1004 00:02:19.879418       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:19.892982       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-8f87946bf-rmd4t"
I1004 00:02:19.899117       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-8f87946bf-qv64t"
I1004 00:02:19.906306       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-59b8b5fbf to 3"
I1004 00:02:19.907063       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=3 creating=3
I1004 00:02:19.916682       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-78ddffb785-zqqwg"
I1004 00:02:19.928702       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:19.937486       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-59b8b5fbf-hcfhh"
I1004 00:02:19.956371       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-59b8b5fbf-xvmgs"
I1004 00:02:19.958734       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-59b8b5fbf-sbzkb"
I1004 00:02:19.996356       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:20.003753       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=0 deleting=3
I1004 00:02:20.005607       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-59b8b5fbf" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf webserver-78ddffb785 webserver-74dbb488d9]
I1004 00:02:20.006313       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-sbzkb"
I1004 00:02:20.007131       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-hcfhh"
I1004 00:02:20.005532       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-59b8b5fbf to 0"
I1004 00:02:20.007156       1 controller_utils.go:592] "Deleting pod" controller="webserver-59b8b5fbf" pod="deployment-2042/webserver-59b8b5fbf-xvmgs"
I1004 00:02:20.030221       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-74dbb488d9" need=3 creating=3
I1004 00:02:20.036215       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-xvmgs"
I1004 00:02:20.037016       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-74dbb488d9 to 3"
I1004 00:02:20.044546       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-7srnh"
I1004 00:02:20.044584       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-hcfhh"
I1004 00:02:20.044600       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-59b8b5fbf-sbzkb"
I1004 00:02:20.078472       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-frglx"
I1004 00:02:20.079804       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-9w62h"
I1004 00:02:20.226062       1 gce_util.go:89] Successfully deleted GCE PD volume e2e-5690f240d7-0a91d-k-pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5
I1004 00:02:20.226309       1 pv_controller.go:1435] volume "pvc-2db4ee23-11cb-475f-b5d7-51aa67734cf5" deleted
I1004 00:02:20.246759       1 pv_controller_base.go:505] deletion of claim "provisioning-5165/gcepdczng6" was already processed
I1004 00:02:20.620879       1 namespace_controller.go:185] Namespace has been deleted provisioning-3815
E1004 00:02:21.296906       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1004 00:02:22.569066       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:22.569104       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:22.569117       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:22.569130       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set
I1004 00:02:22.569142       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set
E1004 00:02:22.736321       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3815-1990/default: secrets "default-token-6nnnm" is forbidden: unable to create new content in namespace provisioning-3815-1990 because it is being terminated
E1004 00:02:23.288200       1 tokens_controller.go:262] error synchronizing serviceaccount projected-2683/default: secrets "default-token-wsbnb" is forbidden: unable to create new content in namespace projected-2683 because it is being terminated
I1004 00:02:24.000300       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-2536/test-rolling-update-deployment"
I1004 00:02:24.409770       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=2 deleting=1
I1004 00:02:24.410162       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-78ddffb785" relatedReplicaSets=[webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf webserver-78ddffb785 webserver-74dbb488d9]
I1004 00:02:24.410466       1 controller_utils.go:592] "Deleting pod" controller="webserver-78ddffb785" pod="deployment-2042/webserver-78ddffb785-dxnxd"
I1004 00:02:24.410106       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-78ddffb785 to 2"
I1004 00:02:24.426524       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-78ddffb785-dxnxd"
I1004 00:02:24.427325       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-74dbb488d9" need=4 creating=1
I1004 00:02:24.437293       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-74dbb488d9 to 4"
I1004 00:02:24.448160       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:24.449336       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-w22jz"
I1004 00:02:24.664614       1 namespace_controller.go:185] Namespace has been deleted prestop-9797
E1004 00:02:25.476321       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
I1004 00:02:25.508106       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-5604
E1004 00:02:25.654242       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
E1004 00:02:25.905824       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
I1004 00:02:25.957668       1 namespace_controller.go:185] Namespace has been deleted emptydir-6575
E1004 00:02:26.170495       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
E1004 00:02:26.391810       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
E1004 00:02:26.666763       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
E1004 00:02:27.072523       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5165/default: secrets "default-token-mwrzq" is forbidden: unable to create new content in namespace provisioning-5165 because it is being terminated
E1004 00:02:27.132956       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:02:27.343718       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
I1004 00:02:27.460771       1 namespace_controller.go:185] Namespace has been deleted secrets-8240
I1004 00:02:27.475368       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-74dbb488d9" need=4 creating=1
I1004 00:02:27.490517       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-m6hlk"
I1004 00:02:27.530167       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-74dbb488d9" need=4 creating=1
I1004 00:02:27.548938       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-nsr4d"
I1004 00:02:27.567414       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "gcepd-h4wgn" (UniqueName: "kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5") on node "nodes-us-west3-a-chbv"
I1004 00:02:27.578628       1 operation_generator.go:1577] Verified volume is safe to detach for volume "gcepd-h4wgn" (UniqueName: "kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5") on node "nodes-us-west3-a-chbv"
I1004 00:02:27.584767       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-74dbb488d9" need=4 creating=1
I1004 00:02:27.614129       1 event.go:291] "Event occurred" object="deployment-2042/webserver-74dbb488d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-74dbb488d9-dzkn2"
I1004 00:02:27.639892       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=2 creating=1
I1004 00:02:27.654780       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-2jjrq"
I1004 00:02:27.686576       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=2 creating=1
I1004 00:02:27.727072       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-78ddffb785-4sf7s"
I1004 00:02:27.752689       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=1 creating=1
I1004 00:02:27.776120       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-8f87946bf-59qrj"
I1004 00:02:27.814769       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
E1004 00:02:27.873589       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1004 00:02:27.994949       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
I1004 00:02:28.456403       1 namespace_controller.go:185] Namespace has been deleted projected-2683
E1004 00:02:28.848192       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods
I1004 00:02:28.894760       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7180^4") on node "nodes-us-west3-a-gn9g"
I1004 00:02:28.903006       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7180^4") on node "nodes-us-west3-a-gn9g"
I1004 00:02:29.221843       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8416-7569
I1004 00:02:29.350531       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-7180/pvc-hkkmf"
I1004 00:02:29.362415       1 pv_controller.go:640] volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" is released and reclaim policy "Delete" will be executed
I1004 00:02:29.369395       1 pv_controller.go:879] volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" entered phase "Released"
I1004 00:02:29.372420       1 pv_controller.go:1340] isVolumeReleased[pvc-80b14a12-5332-4ef7-8886-0d1144222759]: volume is released
I1004 00:02:29.414141       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-7180/pvc-hkkmf" was already processed
I1004 00:02:29.471165       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-80b14a12-5332-4ef7-8886-0d1144222759" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7180^4") on node "nodes-us-west3-a-gn9g"
I1004 00:02:29.665759       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="DeploymentRollback" message="Rolled back deployment \"webserver\" to revision 8"
I1004 00:02:29.678535       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-59b8b5fbf\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:29.693394       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-8f87946bf" need=0 deleting=1
I1004 00:02:29.693934       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-8f87946bf" relatedReplicaSets=[webserver-8f87946bf webserver-78ddffb785 webserver-74dbb488d9 webserver-847dcfb7fb webserver-59b8b5fbf]
I1004 00:02:29.694244       1 controller_utils.go:592] "Deleting pod" controller="webserver-8f87946bf" pod="deployment-2042/webserver-8f87946bf-59qrj"
I1004 00:02:29.694610       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-8f87946bf to 0"
I1004 00:02:29.700146       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-78ddffb785 to 0"
I1004 00:02:29.701919       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-2042/webserver-78ddffb785" need=0 deleting=2
I1004 00:02:29.702129       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-2042/webserver-78ddffb785" relatedReplicaSets=[webserver-78ddffb785 webserver-74dbb488d9 webserver-847dcfb7fb webserver-59b8b5fbf webserver-8f87946bf]
I1004 00:02:29.702446       1 controller_utils.go:592] "Deleting pod" controller="webserver-78ddffb785" pod="deployment-2042/webserver-78ddffb785-2jjrq"
I1004 00:02:29.702534       1 controller_utils.go:592] "Deleting pod" controller="webserver-78ddffb785" pod="deployment-2042/webserver-78ddffb785-4sf7s"
I1004 00:02:29.706255       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-2042/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1004 00:02:29.722286       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-2042/webserver-59b8b5fbf" need=3 creating=3
I1004 00:02:29.723082       1 event.go:291] "Event occurred" object="deployment-2042/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-59b8b5fbf to 3"
I1004 00:02:29.728497       1 event.go:291] "Event occurred" object="deployment-2042/webserver-8f87946bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-8f87946bf-59qrj"
I1004 00:02:29.745309       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-78ddffb785-2jjrq"
I1004 00:02:29.754182       1 event.go:291] "Event occurred" object="deployment-2042/webserver-78ddffb785" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-78ddffb785-4sf7s"
I1004 00:02:29.760277       1 event.go:291] "Event occurred" object="deployment-2042/webserver-59b8b5fbf" 
kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-kr7p7\"\nI1004 00:02:29.776672       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-9dp75\"\nI1004 00:02:29.778084       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-tspng\"\nI1004 00:02:30.176050       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-9607/pvc-cqs6c\"\nI1004 00:02:30.193805       1 pv_controller.go:640] volume \"pvc-65cc08e0-bf00-4c2d-aa94-3164ec3865ec\" is released and reclaim policy \"Delete\" will be executed\nI1004 00:02:30.201749       1 pv_controller.go:879] volume \"pvc-65cc08e0-bf00-4c2d-aa94-3164ec3865ec\" entered phase \"Released\"\nI1004 00:02:30.207691       1 pv_controller.go:1340] isVolumeReleased[pvc-65cc08e0-bf00-4c2d-aa94-3164ec3865ec]: volume is released\nI1004 00:02:30.228060       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-9607/pvc-cqs6c\" was already processed\nE1004 00:02:30.375069       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods\nI1004 00:02:31.019553       1 namespace_controller.go:185] Namespace has been deleted nettest-8424\nE1004 00:02:31.059176       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1004 00:02:31.539893       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1004 00:02:32.193539       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-3629/pvc-79qxx: storageclass.storage.k8s.io \"provisioning-3629\" not found\nI1004 00:02:32.193941       1 event.go:291] \"Event occurred\" object=\"provisioning-3629/pvc-79qxx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3629\\\" not found\"\nI1004 00:02:32.222354       1 pv_controller.go:879] volume \"local-x8jv7\" entered phase \"Available\"\nI1004 00:02:32.541408       1 namespace_controller.go:185] Namespace has been deleted provisioning-5165\nI1004 00:02:32.632555       1 route_controller.go:294] set node master-us-west3-a-nkrg with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:32.632601       1 route_controller.go:294] set node nodes-us-west3-a-zfb0 with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:32.632733       1 route_controller.go:294] set node nodes-us-west3-a-h26h with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:32.632748       1 route_controller.go:294] set node nodes-us-west3-a-chbv with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:32.632765       1 route_controller.go:294] set node nodes-us-west3-a-gn9g with NodeNetworkUnavailable=false was canceled because it is already set\nI1004 00:02:32.773373       1 operation_generator.go:300] VerifyVolumesAreAttached.BulkVerifyVolumes failed for node \"nodes-us-west3-a-chbv\" and volume \"gcepd-h4wgn\"\nI1004 00:02:32.864469       1 pv_controller.go:930] claim \"provisioning-3629/pvc-79qxx\" bound to volume \"local-x8jv7\"\nI1004 00:02:32.874793       1 pvc_protection_controller.go:291] \"PVC is unused\" 
PVC=\"volume-9034/pvc-5wzgv\"\nI1004 00:02:32.890255       1 pv_controller.go:879] volume \"local-x8jv7\" entered phase \"Bound\"\nI1004 00:02:32.890308       1 pv_controller.go:982] volume \"local-x8jv7\" bound to claim \"provisioning-3629/pvc-79qxx\"\nI1004 00:02:32.916464       1 pv_controller.go:823] claim \"provisioning-3629/pvc-79qxx\" entered phase \"Bound\"\nI1004 00:02:32.925980       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8219/pvc-vxrpq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8219\\\" or manually created by system administrator\"\nI1004 00:02:32.926203       1 pv_controller.go:640] volume \"local-ng5m5\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:02:32.935393       1 pv_controller.go:879] volume \"local-ng5m5\" entered phase \"Released\"\nI1004 00:02:32.942860       1 pv_controller_base.go:505] deletion of claim \"volume-9034/pvc-5wzgv\" was already processed\nI1004 00:02:32.962438       1 pv_controller.go:879] volume \"pvc-6150c1bc-204a-4639-9894-8482f6334fb8\" entered phase \"Bound\"\nI1004 00:02:32.962487       1 pv_controller.go:982] volume \"pvc-6150c1bc-204a-4639-9894-8482f6334fb8\" bound to claim \"csi-mock-volumes-8219/pvc-vxrpq\"\nI1004 00:02:32.982801       1 pv_controller.go:823] claim \"csi-mock-volumes-8219/pvc-vxrpq\" entered phase \"Bound\"\nI1004 00:02:33.053223       1 stateful_set_control.go:555] StatefulSet statefulset-6453/test-ss terminating Pod test-ss-0 for update\nI1004 00:02:33.060902       1 event.go:291] \"Event occurred\" object=\"statefulset-6453/test-ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod test-ss-0 in StatefulSet test-ss successful\"\nI1004 00:02:33.140048       1 namespace_controller.go:185] Namespace has been deleted 
provisioning-3815-1990\nI1004 00:02:33.196220       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"gcepd-h4wgn\" (UniqueName: \"kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5\") on node \"nodes-us-west3-a-chbv\" \nE1004 00:02:33.224329       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods\nI1004 00:02:33.240689       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"gcepd-h4wgn\" (UniqueName: \"kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5\") from node \"nodes-us-west3-a-zfb0\" \nI1004 00:02:33.548507       1 controller.go:400] Ensuring load balancer for service deployment-4160/test-rolling-update-with-lb\nI1004 00:02:33.548667       1 controller.go:901] Adding finalizer to service deployment-4160/test-rolling-update-with-lb\nI1004 00:02:33.551020       1 event.go:291] \"Event occurred\" object=\"deployment-4160/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI1004 00:02:33.616412       1 gce_clusterid.go:208] Created a config map containing clusteriD: 465cd9fbce2fcbb0\nI1004 00:02:34.031647       1 gce_loadbalancer_external.go:74] ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb), us-west3, , [TCP/80], [nodes-us-west3-a-gn9g nodes-us-west3-a-zfb0 nodes-us-west3-a-chbv nodes-us-west3-a-h26h], map[])\nI1004 00:02:34.237645       1 gce_loadbalancer_external.go:97] ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb)): Forwarding rule aa071ba85d2b443c8933a04223ffd30d doesn't exist.\nI1004 00:02:34.277944       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2401/test-deployment-w85gd\" err=\"Operation cannot be fulfilled on 
deployments.apps \\\"test-deployment-w85gd\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1004 00:02:34.339669       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-7584/pvc-n2bfj: storageclass.storage.k8s.io \"volumemode-7584\" not found\nI1004 00:02:34.340156       1 event.go:291] \"Event occurred\" object=\"volumemode-7584/pvc-n2bfj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-7584\\\" not found\"\nI1004 00:02:34.389041       1 pv_controller.go:879] volume \"local-bvgvw\" entered phase \"Available\"\nI1004 00:02:34.396192       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6897-9177/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1004 00:02:34.484569       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6897-9177/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nE1004 00:02:34.576196       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-4386/pvc-2txxr: storageclass.storage.k8s.io \"volume-4386\" not found\nI1004 00:02:34.576563       1 event.go:291] \"Event occurred\" object=\"volume-4386/pvc-2txxr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-4386\\\" not found\"\nI1004 00:02:34.604466       1 pv_controller.go:879] volume \"local-jjg9j\" entered phase \"Available\"\nI1004 00:02:34.880029       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-862/update-demo-nautilus\" need=2 creating=2\nI1004 00:02:34.892243  
     1 event.go:291] \"Event occurred\" object=\"kubectl-862/update-demo-nautilus\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: update-demo-nautilus-ql7k5\"\nI1004 00:02:34.899583       1 event.go:291] \"Event occurred\" object=\"kubectl-862/update-demo-nautilus\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: update-demo-nautilus-5mrwx\"\nI1004 00:02:35.063253       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6150c1bc-204a-4639-9894-8482f6334fb8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8219^4\") from node \"nodes-us-west3-a-gn9g\" \nI1004 00:02:35.089822       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-74dbb488d9 to 3\"\nI1004 00:02:35.090733       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-2042/webserver-74dbb488d9\" need=3 deleting=1\nI1004 00:02:35.090776       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-2042/webserver-74dbb488d9\" relatedReplicaSets=[webserver-8f87946bf webserver-78ddffb785 webserver-74dbb488d9 webserver-847dcfb7fb webserver-59b8b5fbf]\nI1004 00:02:35.090995       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-74dbb488d9\" pod=\"deployment-2042/webserver-74dbb488d9-m6hlk\"\nI1004 00:02:35.101826       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=2 deleting=1\nI1004 00:02:35.101997       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" relatedReplicaSets=[webserver-8f87946bf webserver-78ddffb785 webserver-74dbb488d9 webserver-847dcfb7fb webserver-59b8b5fbf]\nI1004 00:02:35.102147       1 controller_utils.go:592] 
\"Deleting pod\" controller=\"webserver-59b8b5fbf\" pod=\"deployment-2042/webserver-59b8b5fbf-tspng\"\nI1004 00:02:35.107063       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-59b8b5fbf to 2\"\nI1004 00:02:35.107495       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-6453/test-ss-7579f76666\" objectUID=60428f45-a0a0-433b-a887-25c53875d658 kind=\"ControllerRevision\" virtual=false\nI1004 00:02:35.107944       1 stateful_set.go:440] StatefulSet has been deleted statefulset-6453/test-ss\nI1004 00:02:35.110713       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-6453/test-ss-0\" objectUID=2c0da722-b684-41fd-900c-4833a66877ab kind=\"Pod\" virtual=false\nI1004 00:02:35.110759       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-6453/test-ss-5cf9766999\" objectUID=7ce91ef7-7426-4373-9705-297f4d77c6f3 kind=\"ControllerRevision\" virtual=false\nI1004 00:02:35.110795       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-6453/test-ss-1\" objectUID=a37407e0-1bc5-4c21-8786-49d6ab397bbb kind=\"Pod\" virtual=false\nI1004 00:02:35.121665       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-74dbb488d9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-74dbb488d9-m6hlk\"\nI1004 00:02:35.179897       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-59b8b5fbf-tspng\"\nI1004 00:02:35.184933       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-6453/test-ss-1\" objectUID=a37407e0-1bc5-4c21-8786-49d6ab397bbb kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:35.185290       1 
garbagecollector.go:580] \"Deleting object\" object=\"statefulset-6453/test-ss-7579f76666\" objectUID=60428f45-a0a0-433b-a887-25c53875d658 kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:35.186170       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-6453/test-ss-5cf9766999\" objectUID=7ce91ef7-7426-4373-9705-297f4d77c6f3 kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:35.452901       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-2042/webserver-74dbb488d9\" need=2 deleting=1\nI1004 00:02:35.452956       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-2042/webserver-74dbb488d9\" relatedReplicaSets=[webserver-8f87946bf webserver-78ddffb785 webserver-74dbb488d9 webserver-847dcfb7fb webserver-59b8b5fbf]\nI1004 00:02:35.453492       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-74dbb488d9\" pod=\"deployment-2042/webserver-74dbb488d9-dzkn2\"\nI1004 00:02:35.456928       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-74dbb488d9 to 2\"\nI1004 00:02:35.477184       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-74dbb488d9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-74dbb488d9-dzkn2\"\nI1004 00:02:35.497866       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2042/webserver-59b8b5fbf\" need=3 creating=1\nI1004 00:02:35.498731       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-59b8b5fbf to 3\"\nI1004 00:02:35.517300       1 event.go:291] \"Event occurred\" object=\"deployment-2042/webserver-59b8b5fbf\" 
kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-59b8b5fbf-6b4sf\"\nI1004 00:02:35.609847       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6150c1bc-204a-4639-9894-8482f6334fb8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8219^4\") from node \"nodes-us-west3-a-gn9g\" \nI1004 00:02:35.610223       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8219/pvc-volume-tester-mk6fw\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6150c1bc-204a-4639-9894-8482f6334fb8\\\" \"\nI1004 00:02:36.188671       1 namespace_controller.go:185] Namespace has been deleted kubectl-5125\nI1004 00:02:36.222097       1 gce_loadbalancer_external.go:160] ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb)): Ensured IP address 34.106.190.93 (tier: Premium).\nI1004 00:02:36.397388       1 gce_loadbalancer_external.go:194] ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb)): Creating firewall.\nI1004 00:02:36.547628       1 namespace_controller.go:185] Namespace has been deleted lease-test-6557\nE1004 00:02:36.620587       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-7180/default: secrets \"default-token-dc8kk\" is forbidden: unable to create new content in namespace csi-mock-volumes-7180 because it is being terminated\nI1004 00:02:37.760736       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8210/pvc-kh7j6\"\nI1004 00:02:37.772060       1 pv_controller.go:640] volume \"local-8mrwb\" is released and reclaim policy \"Retain\" will be executed\nI1004 00:02:37.800130       1 pv_controller.go:879] volume \"local-8mrwb\" entered phase \"Released\"\nI1004 00:02:37.943101       1 pv_controller_base.go:505] deletion 
of claim \"provisioning-8210/pvc-kh7j6\" was already processed\nE1004 00:02:38.203241       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-5623/default: secrets \"default-token-tzcd4\" is forbidden: unable to create new content in namespace security-context-5623 because it is being terminated\nI1004 00:02:38.499939       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-598bb47d9b\" objectUID=32bc743f-6bb0-4b35-8d85-25992722760d kind=\"ControllerRevision\" virtual=false\nI1004 00:02:38.501790       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7180-6341/csi-mockplugin\nI1004 00:02:38.501865       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-0\" objectUID=7b3b3e00-0fc8-420a-8047-b1b738ea626b kind=\"Pod\" virtual=false\nI1004 00:02:38.510047       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-0\" objectUID=7b3b3e00-0fc8-420a-8047-b1b738ea626b kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:38.510372       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-598bb47d9b\" objectUID=32bc743f-6bb0-4b35-8d85-25992722760d kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:38.536150       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-attacher-6c89649458\" objectUID=0a8cfd89-fb29-41a2-ac64-4b7b4296ff4a kind=\"ControllerRevision\" virtual=false\nI1004 00:02:38.536504       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-attacher-0\" objectUID=8de52855-d413-499a-b629-6eee23dfdf2f kind=\"Pod\" virtual=false\nI1004 00:02:38.537085       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7180-6341/csi-mockplugin-attacher\nI1004 00:02:38.545545       1 garbagecollector.go:580] 
\"Deleting object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-attacher-6c89649458\" objectUID=0a8cfd89-fb29-41a2-ac64-4b7b4296ff4a kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:38.553180       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-attacher-0\" objectUID=8de52855-d413-499a-b629-6eee23dfdf2f kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:38.576722       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-resizer-7bb5b47777\" objectUID=490573e9-2573-4b11-b860-b5805b909e55 kind=\"ControllerRevision\" virtual=false\nI1004 00:02:38.582759       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7180-6341/csi-mockplugin-resizer\nI1004 00:02:38.583079       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-resizer-0\" objectUID=07882b04-8944-4cc2-918e-cbfb4e486ee9 kind=\"Pod\" virtual=false\nI1004 00:02:38.600353       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-resizer-0\" objectUID=07882b04-8944-4cc2-918e-cbfb4e486ee9 kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:38.600906       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7180-6341/csi-mockplugin-resizer-7bb5b47777\" objectUID=490573e9-2573-4b11-b860-b5805b909e55 kind=\"ControllerRevision\" propagationPolicy=Background\nE1004 00:02:38.755433       1 tokens_controller.go:262] error synchronizing serviceaccount volume-9034/default: secrets \"default-token-2vrqg\" is forbidden: unable to create new content in namespace volume-9034 because it is being terminated\nE1004 00:02:38.803423       1 namespace_controller.go:162] deletion of namespace disruption-9642 failed: unexpected items still remain in namespace: disruption-9642 for gvr: /v1, Resource=pods\nI1004 00:02:39.216222       1 garbagecollector.go:471] \"Processing 
object\" object=\"csi-mock-volumes-9607-2908/csi-mockplugin-77fcf87d9\" objectUID=57cbaeba-5fdd-480c-8897-9d8e6cf10984 kind=\"ControllerRevision\" virtual=false\nI1004 00:02:39.216823       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-9607-2908/csi-mockplugin\nI1004 00:02:39.217105       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-9607-2908/csi-mockplugin-0\" objectUID=e64108d0-47a2-44df-b18b-96987f01366e kind=\"Pod\" virtual=false\nI1004 00:02:39.221225       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-9607-2908/csi-mockplugin-0\" objectUID=e64108d0-47a2-44df-b18b-96987f01366e kind=\"Pod\" propagationPolicy=Background\nI1004 00:02:39.221278       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-9607-2908/csi-mockplugin-77fcf87d9\" objectUID=57cbaeba-5fdd-480c-8897-9d8e6cf10984 kind=\"ControllerRevision\" propagationPolicy=Background\nI1004 00:02:39.567366       1 event.go:291] \"Event occurred\" object=\"ephemeral-8184/inline-volume-tester2-ddwzs-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester2-ddwzs to be scheduled\"\nI1004 00:02:39.603821       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6897/pvc-qmg6m\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-6897\\\" or manually created by system administrator\"\nI1004 00:02:39.624078       1 pv_controller.go:879] volume \"pvc-94707864-6b7d-41c0-87bb-1ca027aff4b5\" entered phase \"Bound\"\nI1004 00:02:39.624129       1 pv_controller.go:982] volume \"pvc-94707864-6b7d-41c0-87bb-1ca027aff4b5\" bound to claim \"csi-mock-volumes-6897/pvc-qmg6m\"\nI1004 00:02:39.634201       1 pv_controller.go:823] claim \"csi-mock-volumes-6897/pvc-qmg6m\" 
entered phase \"Bound\"\nI1004 00:02:39.639272       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2401/test-deployment-w85gd-794dd694d8\" need=1 creating=1\nI1004 00:02:39.664527       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-2401/test-deployment-w85gd-794dd694d8\" objectUID=406d75ad-a182-46ca-b559-1443e5b500ae kind=\"ReplicaSet\" virtual=false\nI1004 00:02:39.664904       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-2401/test-deployment-w85gd\"\nI1004 00:02:39.667147       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-2401/test-deployment-w85gd-794dd694d8\" objectUID=406d75ad-a182-46ca-b559-1443e5b500ae kind=\"ReplicaSet\" propagationPolicy=Background\nI1004 00:02:40.369971       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"gcepd-h4wgn\" (UniqueName: \"kubernetes.io/gce-pd/e2e-c9bd0c44-b196-462e-94f0-f0f3d5ecc6f5\") from node \"nodes-us-west3-a-zfb0\" \nI1004 00:02:40.370238       1 event.go:291] \"Event occurred\" object=\"volume-5283/gcepd-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"gcepd-h4wgn\\\" \"\nI1004 00:02:40.392451       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-6453/test-fxsqw\" objectUID=a6447abc-c808-4875-b6ba-c43038cb9b28 kind=\"EndpointSlice\" virtual=false\nI1004 00:02:40.403599       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-6453/test-fxsqw\" objectUID=a6447abc-c808-4875-b6ba-c43038cb9b28 kind=\"EndpointSlice\" propagationPolicy=Background\nI1004 00:02:40.458144       1 gce_loadbalancer_external.go:198] ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb)): Created firewall.\nI1004 00:02:40.668502       1 gce_loadbalancer_external.go:207] 
ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb)): Target pool for service doesn't exist.\nE1004 00:02:40.672634       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-6092/default: secrets \"default-token-tv2h6\" is forbidden: unable to create new content in namespace configmap-6092 because it is being terminated\nI1004 00:02:40.853266       1 gce_loadbalancer_external.go:223] ensureExternalLoadBalancer(aa071ba85d2b443c8933a04223ffd30d(deployment-4160/test-rolling-update-with-lb)): Updating from nodes health checks to local traffic health checks.\nI1004 00:02:40.980616       1 gce_loadbalancer_external.go:906] Creating firewall k8s-aa071ba85d2b443c8933a04223ffd30d-http-hc for health checks.\nI1004 00:02:41.178606       1 event.go:291] \"Event occurred\" object=\"ephemeral-8184/inline-volume-tester2-ddwzs-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-8184\\\" or ma