Error lines from build-log.txt
... skipping 183 lines ...
Updating project ssh metadata...
..................................Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2].
.done.
WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.
I0623 01:05:29.309343 5946 up.go:44] Cleaning up any leaked resources from previous cluster
I0623 01:05:29.309507 5946 dumplogs.go:45] /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kops toolbox dump --name e2e-e2e-kops-gce-stable.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh2431462283/key --ssh-user prow
W0623 01:05:29.518064 5946 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 01:05:29.518116 5946 down.go:48] /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kops delete cluster --name e2e-e2e-kops-gce-stable.k8s.local --yes
I0623 01:05:29.542451 5994 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 01:05:29.542600 5994 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-gce-stable.k8s.local" not found
I0623 01:05:29.657028 5946 gcs.go:51] gsutil ls -b -p k8s-jkns-gce-soak-2 gs://k8s-jkns-gce-soak-2-state-53
I0623 01:05:31.467902 5946 gcs.go:70] gsutil mb -p k8s-jkns-gce-soak-2 gs://k8s-jkns-gce-soak-2-state-53
Creating gs://k8s-jkns-gce-soak-2-state-53/...
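The two gsutil calls above (`ls -b` to probe, then `mb` when the probe fails) are the usual create-if-missing pattern for a state-store bucket. A minimal sketch of that pattern, with the command runner injected so the logic can be exercised without gcloud credentials; `ensure_bucket` and its signature are illustrative, not part of kubetest2:

```python
import subprocess
from typing import Callable, List


def ensure_bucket(project: str, bucket: str,
                  run: Callable[[List[str]], int] = subprocess.call) -> bool:
    """Return True if the bucket had to be created, False if it already existed."""
    # Probe first: `gsutil ls -b` exits non-zero when the bucket is absent.
    if run(["gsutil", "ls", "-b", "-p", project, f"gs://{bucket}"]) == 0:
        return False
    # Bucket missing: create it, mirroring `gsutil mb -p <project> gs://<bucket>`.
    if run(["gsutil", "mb", "-p", project, f"gs://{bucket}"]) != 0:
        raise RuntimeError(f"failed to create gs://{bucket}")
    return True
```

With the default runner this shells out to gsutil; in tests a fake runner stands in for both commands.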
I0623 01:05:33.562302 5946 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/23 01:05:33 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0623 01:05:33.575545 5946 http.go:37] curl https://ip.jsb.workers.dev
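The curl pair above is a lookup with fallback: the GCE metadata server returns 404 when the instance has no external access config, so the harness falls back to a public IP echo service. A minimal sketch of that logic, assuming an injected `fetch(url) -> (status, body)` callable so it can run without network access; only the two URLs come from the log, and real metadata requests must also send the `Metadata-Flavor: Google` header:

```python
from typing import Callable, Optional, Tuple

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/network-interfaces/0/access-configs/0/external-ip")
FALLBACK_URL = "https://ip.jsb.workers.dev"


def external_ip(fetch: Callable[[str], Tuple[int, str]]) -> Optional[str]:
    """Resolve this machine's external IP: metadata server first, echo service second."""
    status, body = fetch(METADATA_URL)  # 404 here means "no external access config"
    if status == 200:
        return body.strip()
    status, body = fetch(FALLBACK_URL)
    return body.strip() if status == 200 else None
```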
I0623 01:05:33.680325 5946 up.go:159] /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kops create cluster --name e2e-e2e-kops-gce-stable.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.25.0-alpha.1 --ssh-public-key /tmp/kops-ssh2431462283/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --gce-service-account=default --admin-access 35.222.20.247/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west3-a --master-size e2-standard-2 --project k8s-jkns-gce-soak-2
I0623 01:05:33.701684 6284 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 01:05:33.701805 6284 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0623 01:05:33.737161 6284 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh2431462283/key.pub
I0623 01:05:33.992474 6284 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 375 lines ...
W0623 01:05:41.180493 6305 vfs_castore.go:379] CA private key was not found
I0623 01:05:41.211042 6305 address.go:139] GCE creating address: "api-e2e-e2e-kops-gce-stable-k8s-local"
I0623 01:05:41.271804 6305 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0623 01:05:41.287560 6305 keypair.go:225] Issuing new certificate: "etcd-peers-ca-main"
I0623 01:05:41.369532 6305 keypair.go:225] Issuing new certificate: "service-account"
I0623 01:05:54.003905 6305 executor.go:111] Tasks: 42 done / 68 total; 20 can run
W0623 01:06:07.503060 6305 executor.go:139] error running task "ForwardingRule/api-e2e-e2e-kops-gce-stable-k8s-local" (9m46s remaining to succeed): error creating ForwardingRule "api-e2e-e2e-kops-gce-stable-k8s-local": googleapi: Error 400: The resource 'projects/k8s-jkns-gce-soak-2/regions/us-west3/targetPools/api-e2e-e2e-kops-gce-stable-k8s-local' is not ready, resourceNotReady
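The `resourceNotReady` failure above is treated as transient: the executor keeps retrying the task as long as its per-task budget ("9m46s remaining to succeed") has not expired. A minimal sketch of that deadline-based retry, with `clock` and `sleep` injected for testability; the marker strings and function name are assumptions for illustration, not kops code:

```python
import time

# Substrings that mark a transient GCE error worth retrying (illustrative list).
TRANSIENT_MARKERS = ("resourceNotReady", "is not ready")


def run_with_deadline(task, deadline_s: float, interval_s: float = 10.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Run task(), retrying transient failures until deadline_s has elapsed."""
    start = clock()
    while True:
        try:
            return task()
        except RuntimeError as err:
            transient = any(m in str(err) for m in TRANSIENT_MARKERS)
            remaining = deadline_s - (clock() - start)
            if not transient or remaining <= 0:
                raise  # non-transient errors and exhausted budgets propagate
            sleep(min(interval_s, remaining))
```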
I0623 01:06:07.503172 6305 executor.go:111] Tasks: 61 done / 68 total; 5 can run
I0623 01:06:18.032019 6305 executor.go:111] Tasks: 66 done / 68 total; 2 can run
I0623 01:06:32.814241 6305 executor.go:111] Tasks: 68 done / 68 total; 0 can run
I0623 01:06:32.920014 6305 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-e2e-kops-gce-stable.k8s.local
... skipping 8 lines ...
I0623 01:06:43.287960 5946 up.go:243] /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kops validate cluster --name e2e-e2e-kops-gce-stable.k8s.local --count 10 --wait 15m0s
I0623 01:06:43.313551 6324 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 01:06:43.313679 6324 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-gce-stable.k8s.local
W0623 01:07:13.621866 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: i/o timeout
W0623 01:07:23.646661 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:07:33.670549 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:07:43.696159 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:07:53.719625 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:08:03.742736 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:08:13.767325 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:08:23.790917 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:08:33.815493 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:08:43.843948 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:08:53.867505 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:09:03.893468 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": dial tcp 34.106.168.174:443: connect: connection refused
W0623 01:09:23.919789 6324 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.168.174/api/v1/nodes": net/http: TLS handshake timeout
I0623 01:09:41.605102 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 5 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/master-us-west3-a-bgwv machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/master-us-west3-a-bgwv" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Validation Failed
W0623 01:09:42.254318 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:09:52.605717 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 7 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Pod kube-system/kube-controller-manager-master-us-west3-a-bgwv system-cluster-critical pod "kube-controller-manager-master-us-west3-a-bgwv" is not ready (kube-controller-manager)
Validation Failed
W0623 01:09:53.251478 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:10:03.697589 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/master-us-west3-a-bgwv machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/master-us-west3-a-bgwv" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Validation Failed
W0623 01:10:04.430497 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:10:14.807561 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/master-us-west3-a-bgwv machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/master-us-west3-a-bgwv" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-9jqc" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-j1m9" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Validation Failed
W0623 01:10:15.528880 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:10:25.946859 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 9 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Pod kube-system/coredns-autoscaler-5d4dbc7b59-m5zdd system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-m5zdd" is pending
Pod kube-system/coredns-dd657c749-zjbl8 system-cluster-critical pod "coredns-dd657c749-zjbl8" is pending
Pod kube-system/etcd-manager-main-master-us-west3-a-bgwv system-cluster-critical pod "etcd-manager-main-master-us-west3-a-bgwv" is pending
Validation Failed
W0623 01:10:26.562336 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:10:37.027159 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 11 lines ...
Node master-us-west3-a-bgwv node "master-us-west3-a-bgwv" of role "master" is not ready
Pod kube-system/coredns-autoscaler-5d4dbc7b59-m5zdd system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-m5zdd" is pending
Pod kube-system/coredns-dd657c749-zjbl8 system-cluster-critical pod "coredns-dd657c749-zjbl8" is pending
Pod kube-system/etcd-manager-events-master-us-west3-a-bgwv system-cluster-critical pod "etcd-manager-events-master-us-west3-a-bgwv" is pending
Pod kube-system/metadata-proxy-v0.12-zgkbf system-node-critical pod "metadata-proxy-v0.12-zgkbf" is pending
Validation Failed
W0623 01:10:37.671970 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:10:48.060090 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 9 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Node master-us-west3-a-bgwv master "master-us-west3-a-bgwv" is missing kube-apiserver pod
Pod kube-system/coredns-autoscaler-5d4dbc7b59-m5zdd system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-m5zdd" is pending
Pod kube-system/coredns-dd657c749-zjbl8 system-cluster-critical pod "coredns-dd657c749-zjbl8" is pending
Validation Failed
W0623 01:10:48.681188 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:10:59.012290 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 9 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-l43j" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Node master-us-west3-a-bgwv master "master-us-west3-a-bgwv" is missing kube-apiserver pod
Pod kube-system/coredns-autoscaler-5d4dbc7b59-m5zdd system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-m5zdd" is pending
Pod kube-system/coredns-dd657c749-zjbl8 system-cluster-critical pod "coredns-dd657c749-zjbl8" is pending
Validation Failed
W0623 01:10:59.709685 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:11:10.096664 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 10 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284 machine "https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-soak-2/zones/us-west3-a/instances/nodes-us-west3-a-s284" has not yet joined cluster
Node nodes-us-west3-a-l43j node "nodes-us-west3-a-l43j" of role "node" is not ready
Pod kube-system/coredns-autoscaler-5d4dbc7b59-m5zdd system-cluster-critical pod "coredns-autoscaler-5d4dbc7b59-m5zdd" is pending
Pod kube-system/coredns-dd657c749-zjbl8 system-cluster-critical pod "coredns-dd657c749-zjbl8" is pending
Pod kube-system/metadata-proxy-v0.12-6x75n system-node-critical pod "metadata-proxy-v0.12-6x75n" is pending
Validation Failed
W0623 01:11:10.751567 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:11:21.051813 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 16 lines ...
Pod kube-system/coredns-dd657c749-zjbl8 system-cluster-critical pod "coredns-dd657c749-zjbl8" is pending
Pod kube-system/metadata-proxy-v0.12-4frhr system-node-critical pod "metadata-proxy-v0.12-4frhr" is pending
Pod kube-system/metadata-proxy-v0.12-6x75n system-node-critical pod "metadata-proxy-v0.12-6x75n" is pending
Pod kube-system/metadata-proxy-v0.12-qs49b system-node-critical pod "metadata-proxy-v0.12-qs49b" is pending
Pod kube-system/metadata-proxy-v0.12-zxqbk system-node-critical pod "metadata-proxy-v0.12-zxqbk" is pending
Validation Failed
W0623 01:11:21.651743 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:11:32.025790 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 8 lines ...
VALIDATION ERRORS
KIND NAME MESSAGE
Node nodes-us-west3-a-s284 node "nodes-us-west3-a-s284" of role "node" is not ready
Pod kube-system/metadata-proxy-v0.12-4frhr system-node-critical pod "metadata-proxy-v0.12-4frhr" is pending
Validation Failed
W0623 01:11:32.844895 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:11:43.190715 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 7 lines ...
nodes-us-west3-a-s284 node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/metadata-proxy-v0.12-4frhr system-node-critical pod "metadata-proxy-v0.12-4frhr" is pending
Validation Failed
W0623 01:11:43.850562 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:11:54.189001 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 7 lines ...
nodes-us-west3-a-s284 node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/metadata-proxy-v0.12-4frhr system-node-critical pod "metadata-proxy-v0.12-4frhr" is pending
Validation Failed
W0623 01:11:54.918068 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:12:05.248650 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
... skipping 7 lines ...
nodes-us-west3-a-s284 node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/kube-proxy-nodes-us-west3-a-j1m9 system-node-critical pod "kube-proxy-nodes-us-west3-a-j1m9" is pending
Validation Failed
W0623 01:12:06.031162 6324 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 01:12:16.332180 6324 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west3-a Master e2-standard-2 1 1 us-west3
nodes-us-west3-a Node n1-standard-2 4 4 us-west3
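The retry wall above ends once `kops validate cluster ... --count 10 --wait 15m0s` sees enough consecutive clean validations inside the wait budget, with any failed pass resetting the streak. A minimal sketch of that polling contract, assuming injected `validate`, `clock`, and `sleep` stand-ins rather than the real kops validator:

```python
import time


def wait_for_healthy(validate, count: int, wait_s: float,
                     interval_s: float = 10.0,
                     clock=time.monotonic, sleep=time.sleep) -> bool:
    """Poll validate() until `count` consecutive successes or the wait budget expires."""
    start = clock()
    streak = 0
    while clock() - start < wait_s:
        streak = streak + 1 if validate() else 0  # a single failure resets the streak
        if streak >= count:
            return True
        sleep(interval_s)
    return False
```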
... skipping 183 lines ...
===================================
Random Seed: 1655946860 - Will randomize all specs
Will run 7042 specs
Running in parallel across 25 nodes
Jun 23 01:14:37.554: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Jun 23 01:14:37.554: INFO: > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Jun 23 01:14:37.554: INFO: >
Jun 23 01:14:37.554: INFO: Cluster image sources lookup failed: exit status 1
Jun 23 01:14:37.554: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:14:37.556: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 23 01:14:37.677: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 23 01:14:37.772: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 23 01:14:37.772: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
... skipping 352 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 801 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:14:38.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conformance-tests-1530" for this suite.
•
------------------------------
{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:38.647: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/framework/framework.go:187
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [aws] (not gce)
test/e2e/storage/drivers/in_tree.go:1722
------------------------------
... skipping 81 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:14:38.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9864" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:14:37.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
test/e2e/common/node/init_container.go:164
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
test/e2e/framework/framework.go:647
STEP: creating the pod
Jun 23 01:14:38.132: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
test/e2e/framework/framework.go:187
Jun 23 01:14:46.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2167" for this suite.
• [SLOW TEST:8.504 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:14:38.194: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
• [SLOW TEST:10.473 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
no secret-based service account token should be auto-generated
test/e2e/auth/service_accounts.go:56
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated","total":-1,"completed":1,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:10.871 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:48.958: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 50 lines ...
[It] should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
Jun 23 01:14:38.351: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 01:14:38.351: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-5j8l
STEP: Creating a pod to test exec-volume-test
Jun 23 01:14:38.454: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-5j8l" in namespace "volume-5405" to be "Succeeded or Failed"
Jun 23 01:14:38.517: INFO: Pod "exec-volume-test-inlinevolume-5j8l": Phase="Pending", Reason="", readiness=false. Elapsed: 62.074425ms
Jun 23 01:14:40.548: INFO: Pod "exec-volume-test-inlinevolume-5j8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093871812s
Jun 23 01:14:42.543: INFO: Pod "exec-volume-test-inlinevolume-5j8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088276442s
Jun 23 01:14:44.545: INFO: Pod "exec-volume-test-inlinevolume-5j8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090767305s
Jun 23 01:14:46.542: INFO: Pod "exec-volume-test-inlinevolume-5j8l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08712379s
Jun 23 01:14:48.544: INFO: Pod "exec-volume-test-inlinevolume-5j8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089918682s
STEP: Saw pod success
Jun 23 01:14:48.545: INFO: Pod "exec-volume-test-inlinevolume-5j8l" satisfied condition "Succeeded or Failed"
Jun 23 01:14:48.569: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod exec-volume-test-inlinevolume-5j8l container exec-container-inlinevolume-5j8l: <nil>
STEP: delete the pod
Jun 23 01:14:49.220: INFO: Waiting for pod exec-volume-test-inlinevolume-5j8l to disappear
Jun 23 01:14:49.245: INFO: Pod exec-volume-test-inlinevolume-5j8l no longer exists
[1mSTEP[0m: Deleting pod exec-volume-test-inlinevolume-5j8l
Jun 23 01:14:49.245: INFO: Deleting pod "exec-volume-test-inlinevolume-5j8l" in namespace "volume-5405"
... skipping 10 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:49.352: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 56 lines ...
• [SLOW TEST:11.506 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
should list and delete a collection of ReplicaSets [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 33 lines ...
test/e2e/kubectl/framework.go:23
Kubectl validation
test/e2e/kubectl/kubectl.go:1033
should create/apply a valid CR for CRD with validation schema
test/e2e/kubectl/kubectl.go:1052
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":1,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 57 lines ...
• [SLOW TEST:12.989 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
RecreateDeployment should delete old pods and create new ones [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:51.316: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 28 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/sysctl.go:67
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
test/e2e/common/node/sysctl.go:159
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Wait for pod failed reason
Jun 23 01:14:49.211: INFO: Waiting up to 5m0s for pod "sysctl-62091d45-4d04-4f49-9922-9d6a44a7a40e" in namespace "sysctl-3831" to be "failed with reason SysctlForbidden"
Jun 23 01:14:49.237: INFO: Pod "sysctl-62091d45-4d04-4f49-9922-9d6a44a7a40e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.318412ms
Jun 23 01:14:51.263: INFO: Pod "sysctl-62091d45-4d04-4f49-9922-9d6a44a7a40e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051468162s
Jun 23 01:14:53.264: INFO: Pod "sysctl-62091d45-4d04-4f49-9922-9d6a44a7a40e": Phase="Failed", Reason="SysctlForbidden", readiness=false. Elapsed: 4.052313092s
Jun 23 01:14:53.264: INFO: Pod "sysctl-62091d45-4d04-4f49-9922-9d6a44a7a40e" satisfied condition "failed with reason SysctlForbidden"
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/framework/framework.go:187
Jun 23 01:14:53.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3831" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":2,"skipped":13,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:53.361: INFO: Only supported for providers [vsphere] (not gce)
... skipping 47 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 01:14:39.120: INFO: Waiting up to 5m0s for pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5" in namespace "emptydir-8542" to be "Succeeded or Failed"
Jun 23 01:14:39.146: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.409785ms
Jun 23 01:14:41.172: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052738327s
Jun 23 01:14:43.172: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052702728s
Jun 23 01:14:45.174: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053796707s
Jun 23 01:14:47.173: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Running", Reason="", readiness=true. Elapsed: 8.053535543s
Jun 23 01:14:49.175: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Running", Reason="", readiness=false. Elapsed: 10.054870631s
Jun 23 01:14:51.173: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Running", Reason="", readiness=false. Elapsed: 12.053352938s
Jun 23 01:14:53.174: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.054204325s
STEP: Saw pod success
Jun 23 01:14:53.174: INFO: Pod "pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5" satisfied condition "Succeeded or Failed"
Jun 23 01:14:53.199: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5 container test-container: <nil>
STEP: delete the pod
Jun 23 01:14:53.671: INFO: Waiting for pod pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5 to disappear
Jun 23 01:14:53.695: INFO: Pod pod-f5e01d0e-b15b-48a4-95cc-e7d8fd3d2ca5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
new files should be created with FSGroup ownership when container is root
test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":2,"skipped":11,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:53.801: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-05773e9f-fa9a-4f43-a1f1-0d03f03225bb
STEP: Creating a pod to test consume configMaps
Jun 23 01:14:38.358: INFO: Waiting up to 5m0s for pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574" in namespace "configmap-4159" to be "Succeeded or Failed"
Jun 23 01:14:38.422: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Pending", Reason="", readiness=false. Elapsed: 63.774701ms
Jun 23 01:14:40.447: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089119036s
Jun 23 01:14:42.446: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087884954s
Jun 23 01:14:44.446: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088297895s
Jun 23 01:14:46.449: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090447226s
Jun 23 01:14:48.446: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Running", Reason="", readiness=true. Elapsed: 10.087636604s
Jun 23 01:14:50.447: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Running", Reason="", readiness=false. Elapsed: 12.088617965s
Jun 23 01:14:52.454: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Running", Reason="", readiness=false. Elapsed: 14.09620955s
Jun 23 01:14:54.448: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.089759939s
STEP: Saw pod success
Jun 23 01:14:54.448: INFO: Pod "pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574" satisfied condition "Succeeded or Failed"
Jun 23 01:14:54.472: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:14:54.788: INFO: Waiting for pod pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574 to disappear
Jun 23 01:14:54.810: INFO: Pod pod-configmaps-758577fd-0dc5-4d76-96b1-3ec8b2b26574 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:16.843 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Pods
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:20.418 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should be submitted and removed [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 23 lines ...
test/e2e/common/node/runtime.go:43
on terminated container
test/e2e/common/node/runtime.go:136
should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":49,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:58.996: INFO: Only supported for providers [azure] (not gce)
... skipping 49 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 23 01:14:38.731: INFO: Waiting up to 5m0s for pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff" in namespace "emptydir-1997" to be "Succeeded or Failed"
Jun 23 01:14:38.764: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 33.138354ms
Jun 23 01:14:40.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059410496s
Jun 23 01:14:42.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059249893s
Jun 23 01:14:44.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05979922s
Jun 23 01:14:46.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059218739s
Jun 23 01:14:48.790: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058693441s
Jun 23 01:14:50.804: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.072226349s
Jun 23 01:14:52.795: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 14.06372841s
Jun 23 01:14:54.796: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064398535s
Jun 23 01:14:56.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Pending", Reason="", readiness=false. Elapsed: 18.059449095s
Jun 23 01:14:58.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.059636318s
STEP: Saw pod success
Jun 23 01:14:58.791: INFO: Pod "pod-b3c381a3-8777-4f82-b54a-7cf5e543acff" satisfied condition "Succeeded or Failed"
Jun 23 01:14:58.826: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-b3c381a3-8777-4f82-b54a-7cf5e543acff container test-container: <nil>
STEP: delete the pod
Jun 23 01:14:58.933: INFO: Waiting for pod pod-b3c381a3-8777-4f82-b54a-7cf5e543acff to disappear
Jun 23 01:14:58.968: INFO: Pod pod-b3c381a3-8777-4f82-b54a-7cf5e543acff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:48
volume on tmpfs should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":1,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:14:59.064: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 90 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 23 01:14:49.588: INFO: Waiting up to 5m0s for pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2" in namespace "emptydir-8896" to be "Succeeded or Failed"
Jun 23 01:14:49.613: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.061219ms
Jun 23 01:14:51.638: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049789799s
Jun 23 01:14:53.640: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051432051s
Jun 23 01:14:55.638: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04981571s
Jun 23 01:14:57.640: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051429436s
Jun 23 01:14:59.638: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049770146s
STEP: Saw pod success
Jun 23 01:14:59.638: INFO: Pod "pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2" satisfied condition "Succeeded or Failed"
Jun 23 01:14:59.664: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2 container test-container: <nil>
STEP: delete the pod
Jun 23 01:14:59.719: INFO: Waiting for pod pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2 to disappear
Jun 23 01:14:59.744: INFO: Pod pod-54850a72-d151-4a66-9a92-f6a24ca2b6e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.418 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] SCTP [LinuxOnly]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 20 lines ...
Jun 23 01:14:48.736: INFO: ExecWithOptions: Clientset creation
Jun 23 01:14:48.736: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/sctp-1491/pods/hostexec-nodes-us-west3-a-j1m9-kntqq/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 01:14:48.955: INFO: exec nodes-us-west3-a-j1m9: command: lsmod | grep sctp
Jun 23 01:14:48.955: INFO: exec nodes-us-west3-a-j1m9: stdout: ""
Jun 23 01:14:48.955: INFO: exec nodes-us-west3-a-j1m9: stderr: ""
Jun 23 01:14:48.955: INFO: exec nodes-us-west3-a-j1m9: exit code: 0
Jun 23 01:14:48.955: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 23 01:14:48.955: INFO: the sctp module is not loaded on node: nodes-us-west3-a-j1m9
STEP: Deleting pod hostexec-nodes-us-west3-a-j1m9-kntqq in namespace sctp-1491
STEP: creating a pod with hostport on the selected node
STEP: Launching the pod on node nodes-us-west3-a-j1m9
Jun 23 01:14:49.018: INFO: Waiting up to 5m0s for pod "hostport" in namespace "sctp-1491" to be "running and ready"
Jun 23 01:14:49.047: INFO: Pod "hostport": Phase="Pending", Reason="", readiness=false. Elapsed: 28.614897ms
... skipping 36 lines ...
• [SLOW TEST:23.703 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
should create a Pod with SCTP HostPort
test/e2e/network/service.go:4124
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort","total":-1,"completed":1,"skipped":12,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:01.907: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 125 lines ...
test/e2e/kubectl/framework.go:23
Kubectl validation
test/e2e/kubectl/kubectl.go:1033
should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
test/e2e/kubectl/kubectl.go:1078
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":2,"skipped":2,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:02.901: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-1993bec6-4524-4594-9936-9c7261b8dae4
STEP: Creating a pod to test consume secrets
Jun 23 01:14:59.269: INFO: Waiting up to 5m0s for pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a" in namespace "secrets-8484" to be "Succeeded or Failed"
Jun 23 01:14:59.294: INFO: Pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.882411ms
Jun 23 01:15:01.319: INFO: Pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050420771s
Jun 23 01:15:03.319: INFO: Pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050446867s
Jun 23 01:15:05.320: INFO: Pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050738509s
Jun 23 01:15:07.320: INFO: Pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050978821s
STEP: Saw pod success
Jun 23 01:15:07.320: INFO: Pod "pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a" satisfied condition "Succeeded or Failed"
Jun 23 01:15:07.345: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 01:15:07.401: INFO: Waiting for pod pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a to disappear
Jun 23 01:15:07.425: INFO: Pod pod-secrets-4165cb8c-3613-422e-ba9f-1a8b54efc54a no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 50 lines ...
• [SLOW TEST:7.842 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:14:51.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
• [SLOW TEST:17.026 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should support configurable pod resolv.conf
test/e2e/network/dns.go:460
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":3,"skipped":4,"failed":0}
S
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:10.675 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
test/e2e/auth/framework.go:23
should support building a client with a CSR
test/e2e/auth/certificates.go:59
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":2,"skipped":11,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:14:38.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
Jun 23 01:14:38.942: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-938 create -f -'
Jun 23 01:14:40.453: INFO: stderr: ""
Jun 23 01:14:40.454: INFO: stdout: "pod/httpd created\n"
Jun 23 01:14:40.454: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 01:14:40.454: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-938" to be "running and ready"
Jun 23 01:14:40.478: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.608301ms
Jun 23 01:14:40.478: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:42.505: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051289879s
Jun 23 01:14:42.505: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:44.507: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053351324s
Jun 23 01:14:44.507: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:46.505: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051200907s
Jun 23 01:14:46.505: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:48.504: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050141781s
Jun 23 01:14:48.504: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:50.504: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049873579s
Jun 23 01:14:50.504: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:52.505: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051030009s
Jun 23 01:14:52.505: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:54.507: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.053398487s
Jun 23 01:14:54.507: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:14:56.506: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.052603417s
Jun 23 01:14:56.506: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC }]
Jun 23 01:14:58.508: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.054549781s
Jun 23 01:14:58.508: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC }]
Jun 23 01:15:00.514: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.060118195s
Jun 23 01:15:00.514: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC }]
Jun 23 01:15:02.505: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.051568207s
Jun 23 01:15:02.505: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC }]
Jun 23 01:15:04.506: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.052544125s
Jun 23 01:15:04.506: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC }]
Jun 23 01:15:06.505: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.05104301s
Jun 23 01:15:06.505: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:40 +0000 UTC }]
Jun 23 01:15:08.504: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 28.050250943s
Jun 23 01:15:08.504: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 01:15:08.504: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec using resource/name
test/e2e/kubectl/kubectl.go:459
STEP: executing a command in the container
... skipping 127 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
Jun 23 01:15:07.885: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-05536595-6ea6-4e58-bf2b-ef7e2ecfab85" in namespace "security-context-test-1210" to be "Succeeded or Failed"
Jun 23 01:15:07.912: INFO: Pod "busybox-readonly-false-05536595-6ea6-4e58-bf2b-ef7e2ecfab85": Phase="Pending", Reason="", readiness=false. Elapsed: 26.412653ms
Jun 23 01:15:09.937: INFO: Pod "busybox-readonly-false-05536595-6ea6-4e58-bf2b-ef7e2ecfab85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051376101s
Jun 23 01:15:11.937: INFO: Pod "busybox-readonly-false-05536595-6ea6-4e58-bf2b-ef7e2ecfab85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052130581s
Jun 23 01:15:11.937: INFO: Pod "busybox-readonly-false-05536595-6ea6-4e58-bf2b-ef7e2ecfab85" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 23 01:15:11.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1210" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
SSSSS
------------------------------
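The `{"msg":"PASSED …","total":…,"completed":…,"skipped":…,"failed":…}` lines scattered through this log are Ginkgo's machine-readable per-spec summaries. A small sketch (assuming only the one-JSON-object-per-line format visible here) that tallies pass/fail counts from such a log:

```python
import json
import re

# Matches the JSON spec-summary lines seen in this log, e.g.
# {"msg":"PASSED [sig-node] ...","total":-1,"completed":4,"skipped":16,"failed":0}
SUMMARY_RE = re.compile(r'^\{"msg":.*\}$')

def tally_specs(lines):
    """Count PASSED/FAILED spec summaries in an e2e log stream."""
    passed = failed = 0
    for line in lines:
        line = line.strip()
        if not SUMMARY_RE.match(line):
            continue  # ordinary INFO/STEP log line, not a summary
        msg = json.loads(line)["msg"]
        if msg.startswith("PASSED"):
            passed += 1
        elif msg.startswith("FAILED"):
            failed += 1
    return passed, failed
```

In practice one would feed the whole build-log file to `tally_specs(open(path))` to cross-check the suite's reported totals.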
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:12.055: INFO: Only supported for providers [aws] (not gce)
... skipping 226 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:12.843: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/framework/framework.go:187
... skipping 84 lines ...
test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
test/e2e/common/node/lifecycle_hook.go:46
should execute poststart exec hook properly [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:15.064: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 207 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-aba52d19-4c0b-41d6-8ee9-b3effbbe34c6
STEP: Creating a pod to test consume configMaps
Jun 23 01:15:12.660: INFO: Waiting up to 5m0s for pod "pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766" in namespace "configmap-3124" to be "Succeeded or Failed"
Jun 23 01:15:12.688: INFO: Pod "pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766": Phase="Pending", Reason="", readiness=false. Elapsed: 28.059875ms
Jun 23 01:15:14.714: INFO: Pod "pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05381932s
Jun 23 01:15:16.714: INFO: Pod "pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766": Phase="Running", Reason="", readiness=true. Elapsed: 4.053624636s
Jun 23 01:15:18.713: INFO: Pod "pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053289586s
STEP: Saw pod success
Jun 23 01:15:18.713: INFO: Pod "pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766" satisfied condition "Succeeded or Failed"
Jun 23 01:15:18.738: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:15:18.808: INFO: Waiting for pod pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766 to disappear
Jun 23 01:15:18.833: INFO: Pod pod-configmaps-81013044-55c4-438d-8813-f3f3c4df0766 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.467 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:18.932: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 402 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:15:19.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "protocol-8761" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":1,"skipped":31,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:19.560: INFO: Only supported for providers [openstack] (not gce)
... skipping 14 lines ...
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:07.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-a5b3f7d8-ddbe-4f16-82bd-7ce4e2fc315e
STEP: Creating a pod to test consume configMaps
Jun 23 01:15:07.740: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211" in namespace "projected-9126" to be "Succeeded or Failed"
Jun 23 01:15:07.765: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Pending", Reason="", readiness=false. Elapsed: 24.808752ms
Jun 23 01:15:09.792: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052179593s
Jun 23 01:15:11.791: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050927663s
Jun 23 01:15:13.790: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050351287s
Jun 23 01:15:15.791: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Running", Reason="", readiness=true. Elapsed: 8.051204194s
Jun 23 01:15:17.791: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Running", Reason="", readiness=true. Elapsed: 10.050953216s
Jun 23 01:15:19.795: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.054613755s
STEP: Saw pod success
Jun 23 01:15:19.795: INFO: Pod "pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211" satisfied condition "Succeeded or Failed"
Jun 23 01:15:19.820: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 01:15:19.881: INFO: Waiting for pod pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211 to disappear
Jun 23 01:15:19.906: INFO: Pod pod-projected-configmaps-ff11bc93-c85d-4f7a-bf44-20e38668f211 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.467 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:20.001: INFO: Only supported for providers [vsphere] (not gce)
... skipping 14 lines ...
Only supported for providers [vsphere] (not gce)
test/e2e/storage/drivers/in_tree.go:1439
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:14:46.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
Jun 23 01:14:46.642: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 01:14:46.706: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9881" in namespace "provisioning-9881" to be "Succeeded or Failed"
Jun 23 01:14:46.730: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 23.355125ms
Jun 23 01:14:48.755: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048382584s
Jun 23 01:14:50.774: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067042632s
Jun 23 01:14:52.755: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048121051s
Jun 23 01:14:54.755: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048280111s
Jun 23 01:14:56.754: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047043123s
Jun 23 01:14:58.754: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.047459092s
STEP: Saw pod success
Jun 23 01:14:58.754: INFO: Pod "hostpath-symlink-prep-provisioning-9881" satisfied condition "Succeeded or Failed"
Jun 23 01:14:58.754: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9881" in namespace "provisioning-9881"
Jun 23 01:14:58.787: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9881" to be fully deleted
Jun 23 01:14:58.821: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-nz2r
STEP: Creating a pod to test subpath
Jun 23 01:14:58.851: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-nz2r" in namespace "provisioning-9881" to be "Succeeded or Failed"
Jun 23 01:14:58.889: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 38.053725ms
Jun 23 01:15:00.917: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065339141s
Jun 23 01:15:02.915: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063359411s
Jun 23 01:15:04.915: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064108854s
Jun 23 01:15:06.915: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063303675s
Jun 23 01:15:08.914: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063140085s
Jun 23 01:15:10.914: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.06313987s
Jun 23 01:15:12.916: INFO: Pod "pod-subpath-test-inlinevolume-nz2r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.064499121s
STEP: Saw pod success
Jun 23 01:15:12.916: INFO: Pod "pod-subpath-test-inlinevolume-nz2r" satisfied condition "Succeeded or Failed"
Jun 23 01:15:12.941: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-subpath-test-inlinevolume-nz2r container test-container-subpath-inlinevolume-nz2r: <nil>
STEP: delete the pod
Jun 23 01:15:13.021: INFO: Waiting for pod pod-subpath-test-inlinevolume-nz2r to disappear
Jun 23 01:15:13.046: INFO: Pod pod-subpath-test-inlinevolume-nz2r no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-nz2r
Jun 23 01:15:13.046: INFO: Deleting pod "pod-subpath-test-inlinevolume-nz2r" in namespace "provisioning-9881"
STEP: Deleting pod
Jun 23 01:15:13.076: INFO: Deleting pod "pod-subpath-test-inlinevolume-nz2r" in namespace "provisioning-9881"
Jun 23 01:15:13.130: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9881" in namespace "provisioning-9881" to be "Succeeded or Failed"
Jun 23 01:15:13.154: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 23.676357ms
Jun 23 01:15:15.179: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048445032s
Jun 23 01:15:17.180: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0492861s
Jun 23 01:15:19.189: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059003143s
Jun 23 01:15:21.180: INFO: Pod "hostpath-symlink-prep-provisioning-9881": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049210032s
STEP: Saw pod success
Jun 23 01:15:21.180: INFO: Pod "hostpath-symlink-prep-provisioning-9881" satisfied condition "Succeeded or Failed"
Jun 23 01:15:21.180: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9881" in namespace "provisioning-9881"
Jun 23 01:15:21.216: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9881" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 23 01:15:21.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9881" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":1,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:21.340: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 77 lines ...
STEP: Create set of pods
Jun 23 01:14:55.102: INFO: created test-pod-1
Jun 23 01:14:55.126: INFO: created test-pod-2
Jun 23 01:14:55.150: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running
Jun 23 01:14:55.150: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-4768' to be running and ready
Jun 23 01:14:55.219: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:55.219: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:55.219: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:55.219: INFO: 0 / 3 pods in namespace 'pods-4768' are running and ready (0 seconds elapsed)
Jun 23 01:14:55.219: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:14:55.219: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:14:55.219: INFO: test-pod-1 nodes-us-west3-a-9jqc Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:55.220: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:55.220: INFO: test-pod-3 nodes-us-west3-a-9jqc Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:55.220: INFO:
Jun 23 01:14:57.292: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:57.292: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:57.292: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:57.292: INFO: 0 / 3 pods in namespace 'pods-4768' are running and ready (2 seconds elapsed)
Jun 23 01:14:57.292: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:14:57.292: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:14:57.292: INFO: test-pod-1 nodes-us-west3-a-9jqc Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:57.292: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:57.292: INFO: test-pod-3 nodes-us-west3-a-9jqc Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:57.292: INFO:
Jun 23 01:14:59.293: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:59.293: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:59.293: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:14:59.293: INFO: 0 / 3 pods in namespace 'pods-4768' are running and ready (4 seconds elapsed)
Jun 23 01:14:59.293: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:14:59.293: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:14:59.293: INFO: test-pod-1 nodes-us-west3-a-9jqc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:59.293: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:59.293: INFO: test-pod-3 nodes-us-west3-a-9jqc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:14:59.293: INFO:
Jun 23 01:15:01.293: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:15:01.293: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:15:01.293: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:15:01.293: INFO: 0 / 3 pods in namespace 'pods-4768' are running and ready (6 seconds elapsed)
Jun 23 01:15:01.293: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:15:01.293: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:15:01.293: INFO: test-pod-1 nodes-us-west3-a-9jqc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:15:01.293: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:15:01.293: INFO: test-pod-3 nodes-us-west3-a-9jqc Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:15:01.293: INFO:
Jun 23 01:15:03.290: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:15:03.290: INFO: 2 / 3 pods in namespace 'pods-4768' are running and ready (8 seconds elapsed)
Jun 23 01:15:03.290: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:15:03.290: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:15:03.290: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:15:03.290: INFO:
Jun 23 01:15:05.292: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:15:05.292: INFO: 2 / 3 pods in namespace 'pods-4768' are running and ready (10 seconds elapsed)
Jun 23 01:15:05.292: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:15:05.292: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:15:05.292: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:15:05.292: INFO:
Jun 23 01:15:07.291: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 01:15:07.291: INFO: 2 / 3 pods in namespace 'pods-4768' are running and ready (12 seconds elapsed)
Jun 23 01:15:07.291: INFO: expected 0 pod replicas in namespace 'pods-4768', 0 are Running and Ready.
Jun 23 01:15:07.291: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 23 01:15:07.291: INFO: test-pod-2 nodes-us-west3-a-s284 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:14:55 +0000 UTC }]
Jun 23 01:15:07.291: INFO:
Jun 23 01:15:09.296: INFO: 3 / 3 pods in namespace 'pods-4768' are running and ready (14 seconds elapsed)
... skipping 20 lines ...
• [SLOW TEST:26.549 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should delete a collection of pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:21.508: INFO: Only supported for providers [openstack] (not gce)
... skipping 100 lines ...
• [SLOW TEST:22.471 seconds]
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":2,"skipped":58,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:21.737: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 192 lines ...
• [SLOW TEST:10.074 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Secrets
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: creating secret secrets-3585/secret-test-2cd6e2b9-ae68-4297-81e2-3be06ef30a0f
STEP: Creating a pod to test consume secrets
Jun 23 01:15:15.438: INFO: Waiting up to 5m0s for pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663" in namespace "secrets-3585" to be "Succeeded or Failed"
Jun 23 01:15:15.464: INFO: Pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663": Phase="Pending", Reason="", readiness=false. Elapsed: 25.249668ms
Jun 23 01:15:17.494: INFO: Pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055907909s
Jun 23 01:15:19.491: INFO: Pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051973411s
Jun 23 01:15:21.489: INFO: Pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050612113s
Jun 23 01:15:23.489: INFO: Pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050676546s
STEP: Saw pod success
Jun 23 01:15:23.489: INFO: Pod "pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663" satisfied condition "Succeeded or Failed"
Jun 23 01:15:23.514: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663 container env-test: <nil>
STEP: delete the pod
Jun 23 01:15:23.584: INFO: Waiting for pod pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663 to disappear
Jun 23 01:15:23.612: INFO: Pod pod-configmaps-27257e43-fad4-444d-b37d-268e07a77663 no longer exists
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.461 seconds]
[sig-node] Secrets
test/e2e/common/node/framework.go:23
should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":41,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:23.694: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 176 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:77
STEP: Creating configMap with name configmap-test-volume-f3876e5e-a969-4082-a99c-1c2817ef81b9
STEP: Creating a pod to test consume configMaps
Jun 23 01:15:19.492: INFO: Waiting up to 5m0s for pod "pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c" in namespace "configmap-273" to be "Succeeded or Failed"
Jun 23 01:15:19.517: INFO: Pod "pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.767953ms
Jun 23 01:15:21.544: INFO: Pod "pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051090405s
Jun 23 01:15:23.544: INFO: Pod "pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051281475s
STEP: Saw pod success
Jun 23 01:15:23.544: INFO: Pod "pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c" satisfied condition "Succeeded or Failed"
Jun 23 01:15:23.572: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:15:23.647: INFO: Waiting for pod pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c to disappear
Jun 23 01:15:23.674: INFO: Pod pod-configmaps-6950b74a-416b-4c02-a3ae-9de1261ebf6c no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
Jun 23 01:15:23.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-273" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":20,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:23.782: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 494 lines ...
• [SLOW TEST:47.259 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
iterative rollouts should eventually progress
test/e2e/apps/deployment.go:135
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":6,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:25.362: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
test/e2e/common/node/runtime.go:43
on terminated container
test/e2e/common/node/runtime.go:136
should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:25.526: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 94 lines ...
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:1577
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:25.646: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 179 lines ...
Jun 23 01:15:17.687: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:17.687: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/hostexec-nodes-us-west3-a-l43j-jw2gp/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-4182%22%2Fhost%3B+mount+-t+tmpfs+e2e-mount-propagation-host+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-4182%22%2Fhost%3B+echo+host+%3E+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-4182%22%2Fhost%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 01:15:17.926: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4182 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:17.926: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:17.926: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:17.927: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:18.155: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jun 23 01:15:18.182: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4182 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:18.182: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:18.183: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:18.183: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:18.378: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:18.403: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4182 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:18.403: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:18.404: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:18.404: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:18.631: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:18.655: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4182 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:18.655: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:18.656: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:18.656: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:18.896: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:18.919: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4182 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:18.919: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:18.920: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:18.920: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/master/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:19.130: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jun 23 01:15:19.155: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4182 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:19.155: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:19.156: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:19.156: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:19.357: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jun 23 01:15:19.381: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4182 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:19.381: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:19.382: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:19.382: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:19.645: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jun 23 01:15:19.668: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4182 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:19.668: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:19.669: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:19.669: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:19.936: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:19.959: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4182 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:19.959: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:19.960: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:19.960: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:20.161: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:20.187: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4182 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:20.187: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:20.188: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:20.188: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/slave/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:20.389: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jun 23 01:15:20.413: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4182 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:20.413: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:20.414: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:20.414: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:20.654: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:20.677: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4182 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:20.677: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:20.678: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:20.678: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:20.890: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:20.913: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4182 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:20.913: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:20.914: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:20.914: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:21.138: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jun 23 01:15:21.161: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4182 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:21.161: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:21.162: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:21.162: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:21.403: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:21.434: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4182 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:21.434: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:21.435: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:21.435: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/private/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:21.649: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:21.672: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4182 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:21.672: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:21.673: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:21.673: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fmaster%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:21.888: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:21.912: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4182 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:21.912: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:21.913: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:21.913: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fslave%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:22.115: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:22.138: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4182 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:22.138: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:22.139: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:22.139: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fprivate%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:22.374: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:22.398: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4182 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:22.398: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:22.398: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:22.398: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fdefault%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:22.635: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jun 23 01:15:22.658: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4182 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 23 01:15:22.658: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:22.660: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:22.660: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/default/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fmnt%2Ftest%2Fhost%2Ffile&container=cntr&container=cntr&stderr=true&stdout=true)
Jun 23 01:15:22.861: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jun 23 01:15:22.861: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-4182 PodName:hostexec-nodes-us-west3-a-l43j-jw2gp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 23 01:15:22.861: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:15:22.862: INFO: ExecWithOptions: Clientset creation
Jun 23 01:15:22.862: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/mount-propagation-4182/pods/hostexec-nodes-us-west3-a-l43j-jw2gp/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=pidof+kubelet&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 01:15:23.080: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4908 -m cat "/var/lib/kubelet/mount-propagation-4182/host/file"] Namespace:mount-propagation-4182 PodName:hostexec-nodes-us-west3-a-l43j-jw2gp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 23 01:15:23.081: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
• [SLOW TEST:47.783 seconds]
[sig-node] Mount propagation
test/e2e/node/framework.go:23
should propagate mounts within defined scopes
test/e2e/node/mount_propagation.go:85
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":1,"skipped":4,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:25.831: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 76 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:15:26.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4207" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:10.550: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Jun 23 01:15:15.203: INFO: PersistentVolumeClaim pvc-m4qgb found but phase is Pending instead of Bound.
Jun 23 01:15:17.228: INFO: PersistentVolumeClaim pvc-m4qgb found and phase=Bound (4.084473096s)
Jun 23 01:15:17.228: INFO: Waiting up to 3m0s for PersistentVolume local-vwzlb to have phase Bound
Jun 23 01:15:17.253: INFO: PersistentVolume local-vwzlb found and phase=Bound (24.82375ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wvrw
STEP: Creating a pod to test subpath
Jun 23 01:15:17.337: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wvrw" in namespace "provisioning-5234" to be "Succeeded or Failed"
Jun 23 01:15:17.368: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw": Phase="Pending", Reason="", readiness=false. Elapsed: 31.017266ms
Jun 23 01:15:19.394: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056749794s
Jun 23 01:15:21.395: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057740811s
Jun 23 01:15:23.395: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05814647s
Jun 23 01:15:25.396: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058410554s
Jun 23 01:15:27.395: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057436628s
STEP: Saw pod success
Jun 23 01:15:27.395: INFO: Pod "pod-subpath-test-preprovisionedpv-wvrw" satisfied condition "Succeeded or Failed"
Jun 23 01:15:27.420: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-wvrw container test-container-volume-preprovisionedpv-wvrw: <nil>
STEP: delete the pod
Jun 23 01:15:27.478: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wvrw to disappear
Jun 23 01:15:27.503: INFO: Pod pod-subpath-test-preprovisionedpv-wvrw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wvrw
Jun 23 01:15:27.504: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wvrw" in namespace "provisioning-5234"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:27.963: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 47 lines ...
Jun 23 01:15:15.641: INFO: PersistentVolumeClaim pvc-qjncb found but phase is Pending instead of Bound.
Jun 23 01:15:17.666: INFO: PersistentVolumeClaim pvc-qjncb found and phase=Bound (6.099214936s)
Jun 23 01:15:17.666: INFO: Waiting up to 3m0s for PersistentVolume local-t9g5p to have phase Bound
Jun 23 01:15:17.689: INFO: PersistentVolume local-t9g5p found and phase=Bound (22.905292ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wkz6
STEP: Creating a pod to test subpath
Jun 23 01:15:17.763: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wkz6" in namespace "provisioning-1675" to be "Succeeded or Failed"
Jun 23 01:15:17.787: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.269982ms
Jun 23 01:15:19.813: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049100383s
Jun 23 01:15:21.812: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048660934s
Jun 23 01:15:23.814: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050051808s
Jun 23 01:15:25.812: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048428772s
Jun 23 01:15:27.815: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.051880464s
Jun 23 01:15:29.814: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.050102922s
STEP: Saw pod success
Jun 23 01:15:29.814: INFO: Pod "pod-subpath-test-preprovisionedpv-wkz6" satisfied condition "Succeeded or Failed"
Jun 23 01:15:29.845: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-wkz6 container test-container-subpath-preprovisionedpv-wkz6: <nil>
STEP: delete the pod
Jun 23 01:15:29.908: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wkz6 to disappear
Jun 23 01:15:29.932: INFO: Pod pod-subpath-test-preprovisionedpv-wkz6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wkz6
Jun 23 01:15:29.932: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wkz6" in namespace "provisioning-1675"
... skipping 21 lines ...
[90mtest/e2e/storage/in_tree_volumes.go:63[0m
[Testpattern: Pre-provisioned PV (default fs)] subPath
[90mtest/e2e/storage/framework/testsuite.go:50[0m
should support readOnly file specified in the volumeMount [LinuxOnly]
[90mtest/e2e/storage/testsuites/subpath.go:382[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}
S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 12 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:15:31.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1326" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":4,"skipped":23,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:31.212: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 75 lines ...
test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
test/e2e/common/node/lifecycle_hook.go:46
should execute prestop exec hook properly [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:31.935: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
test/e2e/common/storage/host_path.go:39
[It] should support subPath [NodeConformance]
test/e2e/common/storage/host_path.go:95
STEP: Creating a pod to test hostPath subPath
Jun 23 01:15:19.797: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4159" to be "Succeeded or Failed"
Jun 23 01:15:19.821: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.466542ms
Jun 23 01:15:21.846: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049291923s
Jun 23 01:15:23.847: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050339841s
Jun 23 01:15:25.846: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049195333s
Jun 23 01:15:27.849: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051539992s
Jun 23 01:15:29.850: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053125489s
Jun 23 01:15:31.852: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.055022299s
STEP: Saw pod success
Jun 23 01:15:31.852: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 23 01:15:31.889: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 23 01:15:31.965: INFO: Waiting for pod pod-host-path-test to disappear
Jun 23 01:15:31.990: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.497 seconds]
[sig-storage] HostPath
test/e2e/common/storage/framework.go:23
should support subPath [NodeConformance]
test/e2e/common/storage/host_path.go:95
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":2,"skipped":39,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run with an explicit non-root user ID [LinuxOnly]
test/e2e/common/node/security_context.go:131
Jun 23 01:15:23.973: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2715" to be "Succeeded or Failed"
Jun 23 01:15:23.998: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 24.325879ms
Jun 23 01:15:26.023: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049834439s
Jun 23 01:15:28.028: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055199766s
Jun 23 01:15:30.024: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050349597s
Jun 23 01:15:32.062: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089037602s
Jun 23 01:15:32.062: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 23 01:15:32.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2715" for this suite.
... skipping 51 lines ...
Jun 23 01:15:00.639: INFO: PersistentVolumeClaim pvc-fj5hv found but phase is Pending instead of Bound.
Jun 23 01:15:02.663: INFO: PersistentVolumeClaim pvc-fj5hv found and phase=Bound (10.214346597s)
Jun 23 01:15:02.663: INFO: Waiting up to 3m0s for PersistentVolume local-7td9b to have phase Bound
Jun 23 01:15:02.685: INFO: PersistentVolume local-7td9b found and phase=Bound (22.638848ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c8m5
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 01:15:02.758: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c8m5" in namespace "provisioning-7889" to be "Succeeded or Failed"
Jun 23 01:15:02.781: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.056693ms
Jun 23 01:15:04.808: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049538559s
Jun 23 01:15:06.808: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050104668s
Jun 23 01:15:08.807: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049132098s
Jun 23 01:15:10.809: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050853757s
Jun 23 01:15:12.809: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Running", Reason="", readiness=true. Elapsed: 10.051110449s
... skipping 5 lines ...
Jun 23 01:15:24.814: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Running", Reason="", readiness=true. Elapsed: 22.055555501s
Jun 23 01:15:26.816: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Running", Reason="", readiness=true. Elapsed: 24.058359787s
Jun 23 01:15:28.808: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Running", Reason="", readiness=true. Elapsed: 26.050474388s
Jun 23 01:15:30.806: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Running", Reason="", readiness=true. Elapsed: 28.048066961s
Jun 23 01:15:32.807: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.049189926s
STEP: Saw pod success
Jun 23 01:15:32.807: INFO: Pod "pod-subpath-test-preprovisionedpv-c8m5" satisfied condition "Succeeded or Failed"
Jun 23 01:15:32.834: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-c8m5 container test-container-subpath-preprovisionedpv-c8m5: <nil>
STEP: delete the pod
Jun 23 01:15:32.888: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c8m5 to disappear
Jun 23 01:15:32.914: INFO: Pod pod-subpath-test-preprovisionedpv-c8m5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c8m5
Jun 23 01:15:32.915: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c8m5" in namespace "provisioning-7889"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
test/e2e/common/node/security_context.go:284
Jun 23 01:15:28.193: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa" in namespace "security-context-test-6839" to be "Succeeded or Failed"
Jun 23 01:15:28.218: INFO: Pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 25.04027ms
Jun 23 01:15:30.244: INFO: Pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0513017s
Jun 23 01:15:32.248: INFO: Pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055052662s
Jun 23 01:15:34.246: INFO: Pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053218783s
Jun 23 01:15:34.246: INFO: Pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa" satisfied condition "Succeeded or Failed"
Jun 23 01:15:34.280: INFO: Got logs for pod "busybox-privileged-true-b99e4fd2-3741-4031-839d-b43c93abfeaa": ""
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 23 01:15:34.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6839" for this suite.
... skipping 3 lines ...
test/e2e/common/node/framework.go:23
When creating a pod with privileged
test/e2e/common/node/security_context.go:234
should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
test/e2e/common/node/security_context.go:284
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":3,"skipped":29,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:34.377: INFO: Only supported for providers [azure] (not gce)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-0766bd1c-9ce0-4128-8c63-88ac47f948a8
STEP: Creating a pod to test consume configMaps
Jun 23 01:15:25.896: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364" in namespace "projected-4762" to be "Succeeded or Failed"
Jun 23 01:15:25.922: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364": Phase="Pending", Reason="", readiness=false. Elapsed: 24.808012ms
Jun 23 01:15:27.949: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051865546s
Jun 23 01:15:29.953: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056253522s
Jun 23 01:15:31.950: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052756427s
Jun 23 01:15:33.950: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053266943s
Jun 23 01:15:35.948: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050828995s
STEP: Saw pod success
Jun 23 01:15:35.948: INFO: Pod "pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364" satisfied condition "Succeeded or Failed"
Jun 23 01:15:35.973: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:15:36.040: INFO: Waiting for pod pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364 to disappear
Jun 23 01:15:36.070: INFO: Pod pod-projected-configmaps-0afe2be6-667f-47b0-b913-c1b11246d364 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.475 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:26.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-3386/configmap-test-183c5c63-586d-4f7b-99db-7fbde5874d97
STEP: Creating a pod to test consume configMaps
Jun 23 01:15:26.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043" in namespace "configmap-3386" to be "Succeeded or Failed"
Jun 23 01:15:26.596: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Pending", Reason="", readiness=false. Elapsed: 23.608469ms
Jun 23 01:15:28.620: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047650087s
Jun 23 01:15:30.627: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054459454s
Jun 23 01:15:32.628: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055762794s
Jun 23 01:15:34.628: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05574754s
Jun 23 01:15:36.620: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Pending", Reason="", readiness=false. Elapsed: 10.04770473s
Jun 23 01:15:38.621: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.049069867s
STEP: Saw pod success
Jun 23 01:15:38.621: INFO: Pod "pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043" satisfied condition "Succeeded or Failed"
Jun 23 01:15:38.644: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043 container env-test: <nil>
STEP: delete the pod
Jun 23 01:15:38.698: INFO: Waiting for pod pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043 to disappear
Jun 23 01:15:38.721: INFO: Pod pod-configmaps-83281f6c-7771-48c1-9400-2457afcf8043 no longer exists
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:12.415 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
should be consumable via environment variable [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:38.791: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
• [SLOW TEST:14.667 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
deployment should delete old replica sets [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":3,"skipped":98,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:39.057: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 53 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should list, patch and delete a collection of StatefulSets [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:39.776: INFO: Only supported for providers [azure] (not gce)
... skipping 177 lines ...
• [SLOW TEST:9.036 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should update annotations on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:32.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
test/e2e/node/security_context.go:163
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 23 01:15:32.401: INFO: Waiting up to 5m0s for pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f" in namespace "security-context-9353" to be "Succeeded or Failed"
Jun 23 01:15:32.427: INFO: Pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.657043ms
Jun 23 01:15:34.452: INFO: Pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051208619s
Jun 23 01:15:36.452: INFO: Pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051323442s
Jun 23 01:15:38.455: INFO: Pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054143009s
Jun 23 01:15:40.455: INFO: Pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054271521s
STEP: Saw pod success
Jun 23 01:15:40.455: INFO: Pod "security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f" satisfied condition "Succeeded or Failed"
Jun 23 01:15:40.480: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f container test-container: <nil>
STEP: delete the pod
Jun 23 01:15:40.549: INFO: Waiting for pod security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f to disappear
Jun 23 01:15:40.583: INFO: Pod security-context-1538f96a-e541-4fa2-a385-8a8c24357b2f no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.475 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support seccomp unconfined on the container [LinuxOnly]
test/e2e/node/security_context.go:163
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":3,"skipped":10,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:40.703: INFO: Only supported for providers [aws] (not gce)
... skipping 107 lines ...
test/e2e/storage/utils/framework.go:23
ConfigMap
test/e2e/storage/volumes.go:49
should be mountable
test/e2e/storage/volumes.go:50
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:41.805: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 129 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:44.126: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 101 lines ...
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:39.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 01:15:40.140: INFO: Waiting up to 5m0s for pod "pod-69dfa48e-a815-43aa-86ea-81fa02bff374" in namespace "emptydir-5885" to be "Succeeded or Failed"
Jun 23 01:15:40.164: INFO: Pod "pod-69dfa48e-a815-43aa-86ea-81fa02bff374": Phase="Pending", Reason="", readiness=false. Elapsed: 23.921133ms
Jun 23 01:15:42.189: INFO: Pod "pod-69dfa48e-a815-43aa-86ea-81fa02bff374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04873388s
Jun 23 01:15:44.190: INFO: Pod "pod-69dfa48e-a815-43aa-86ea-81fa02bff374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049923396s
STEP: Saw pod success
Jun 23 01:15:44.190: INFO: Pod "pod-69dfa48e-a815-43aa-86ea-81fa02bff374" satisfied condition "Succeeded or Failed"
Jun 23 01:15:44.214: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-69dfa48e-a815-43aa-86ea-81fa02bff374 container test-container: <nil>
STEP: delete the pod
Jun 23 01:15:44.275: INFO: Waiting for pod pod-69dfa48e-a815-43aa-86ea-81fa02bff374 to disappear
Jun 23 01:15:44.300: INFO: Pod pod-69dfa48e-a815-43aa-86ea-81fa02bff374 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
Jun 23 01:15:44.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5885" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:44.381: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:337
Jun 23 01:15:39.007: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-03273cf2-a864-466d-8259-820dba98e363" in namespace "security-context-test-6811" to be "Succeeded or Failed"
Jun 23 01:15:39.036: INFO: Pod "alpine-nnp-nil-03273cf2-a864-466d-8259-820dba98e363": Phase="Pending", Reason="", readiness=false. Elapsed: 28.883358ms
Jun 23 01:15:41.072: INFO: Pod "alpine-nnp-nil-03273cf2-a864-466d-8259-820dba98e363": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064218784s
Jun 23 01:15:43.062: INFO: Pod "alpine-nnp-nil-03273cf2-a864-466d-8259-820dba98e363": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054090196s
Jun 23 01:15:45.062: INFO: Pod "alpine-nnp-nil-03273cf2-a864-466d-8259-820dba98e363": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054242969s
Jun 23 01:15:45.062: INFO: Pod "alpine-nnp-nil-03273cf2-a864-466d-8259-820dba98e363" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
Jun 23 01:15:45.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6811" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
when creating containers with AllowPrivilegeEscalation
test/e2e/common/node/security_context.go:298
should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:337
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":17,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:45.181: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 103 lines ...
Jun 23 01:15:18.510: INFO: Pod "pvc-volume-tester-bxrs8": Phase="Running", Reason="", readiness=true. Elapsed: 10.048945916s
Jun 23 01:15:18.510: INFO: Pod "pvc-volume-tester-bxrs8" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 23 01:15:18.510: INFO: Deleting pod "pvc-volume-tester-bxrs8" in namespace "csi-mock-volumes-7508"
Jun 23 01:15:18.536: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bxrs8" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 01:15:24.644: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"eaaf32be-f291-11ec-b404-0665aaa500f1","target_path":"/var/lib/kubelet/pods/56194bef-f98c-4a35-9d43-a85642d7ba09/volumes/kubernetes.io~csi/pvc-11c1a54a-8903-4302-a06e-67d7e3d1bc45/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-bxrs8
Jun 23 01:15:24.644: INFO: Deleting pod "pvc-volume-tester-bxrs8" in namespace "csi-mock-volumes-7508"
STEP: Deleting claim pvc-hdbmk
Jun 23 01:15:24.729: INFO: Waiting up to 2m0s for PersistentVolume pvc-11c1a54a-8903-4302-a06e-67d7e3d1bc45 to get deleted
Jun 23 01:15:24.758: INFO: PersistentVolume pvc-11c1a54a-8903-4302-a06e-67d7e3d1bc45 found and phase=Bound (27.982972ms)
Jun 23 01:15:26.783: INFO: PersistentVolume pvc-11c1a54a-8903-4302-a06e-67d7e3d1bc45 was removed
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
test/e2e/storage/csi_mock_volume.go:467
should not be passed when podInfoOnMount=false
test/e2e/storage/csi_mock_volume.go:517
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:46.016: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 94 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: tmpfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 67 lines ...
• [SLOW TEST:31.031 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 5 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-projected-dmmh
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 01:15:22.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dmmh" in namespace "subpath-6674" to be "Succeeded or Failed"
Jun 23 01:15:22.201: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Pending", Reason="", readiness=false. Elapsed: 24.893944ms
Jun 23 01:15:24.233: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057135636s
Jun 23 01:15:26.227: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050998932s
Jun 23 01:15:28.228: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051961564s
Jun 23 01:15:30.227: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Running", Reason="", readiness=true. Elapsed: 8.050614724s
Jun 23 01:15:32.233: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Running", Reason="", readiness=true. Elapsed: 10.056552183s
... skipping 3 lines ...
Jun 23 01:15:40.232: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Running", Reason="", readiness=true. Elapsed: 18.055802897s
Jun 23 01:15:42.227: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Running", Reason="", readiness=true. Elapsed: 20.051175853s
Jun 23 01:15:44.228: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Running", Reason="", readiness=false. Elapsed: 22.052003664s
Jun 23 01:15:46.232: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Running", Reason="", readiness=false. Elapsed: 24.055755289s
Jun 23 01:15:48.234: INFO: Pod "pod-subpath-test-projected-dmmh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.057411238s
STEP: Saw pod success
Jun 23 01:15:48.234: INFO: Pod "pod-subpath-test-projected-dmmh" satisfied condition "Succeeded or Failed"
Jun 23 01:15:48.261: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-subpath-test-projected-dmmh container test-container-subpath-projected-dmmh: <nil>
STEP: delete the pod
Jun 23 01:15:48.331: INFO: Waiting for pod pod-subpath-test-projected-dmmh to disappear
Jun 23 01:15:48.357: INFO: Pod pod-subpath-test-projected-dmmh no longer exists
STEP: Deleting pod pod-subpath-test-projected-dmmh
Jun 23 01:15:48.357: INFO: Deleting pod "pod-subpath-test-projected-dmmh" in namespace "subpath-6674"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with projected pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":-1,"completed":3,"skipped":80,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:48.498: INFO: Only supported for providers [aws] (not gce)
... skipping 44 lines ...
• [SLOW TEST:14.525 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
evictions: enough pods, replicaSet, percentage => should allow an eviction
test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":2,"skipped":15,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:48.852: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
• [SLOW TEST:26.124 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
test/e2e/network/service.go:933
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:49.096: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 21 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-projected-all-test-volume-31edbb05-3060-482f-a7f8-ca0bc37e1a91
STEP: Creating secret with name secret-projected-all-test-volume-ccaa2594-6f92-4bcd-a1bc-ea201854a70f
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 23 01:15:40.567: INFO: Waiting up to 5m0s for pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9" in namespace "projected-195" to be "Succeeded or Failed"
Jun 23 01:15:40.593: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.108102ms
Jun 23 01:15:42.618: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05137856s
Jun 23 01:15:44.617: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050303296s
Jun 23 01:15:46.618: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051104936s
Jun 23 01:15:48.617: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050039067s
Jun 23 01:15:50.619: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051835198s
STEP: Saw pod success
Jun 23 01:15:50.619: INFO: Pod "projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9" satisfied condition "Succeeded or Failed"
Jun 23 01:15:50.647: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9 container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 23 01:15:50.707: INFO: Waiting for pod projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9 to disappear
Jun 23 01:15:50.732: INFO: Pod projected-volume-2699e02b-bad9-4867-9eb1-2b1196c61dc9 no longer exists
[AfterEach] [sig-storage] Projected combined
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.486 seconds]
[sig-storage] Projected combined
test/e2e/common/storage/framework.go:23
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:50.829: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 77 lines ...
• [SLOW TEST:72.439 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
test/e2e/common/node/container_probe.go:244
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":10,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:51.163: INFO: Only supported for providers [vsphere] (not gce)
... skipping 72 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver emptydir doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 103 lines ...
test/e2e/kubectl/portforward.go:476
that expects a client request
test/e2e/kubectl/portforward.go:477
should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:481
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":43,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:52.828: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 137 lines ...
test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
test/e2e/storage/csi_mock_volume.go:332
should require VolumeAttach for drivers with attachment
test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:53.924: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 11 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-node] RuntimeClass
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:52.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:15:54.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-5469" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":4,"skipped":23,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:8.866 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should support configurable pod DNS nameservers [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:57.757: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 39 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:15:58.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5465" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:58.245: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 37 lines ...
Jun 23 01:15:29.971: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048247052s
Jun 23 01:15:31.971: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.047948878s
Jun 23 01:15:31.971: INFO: Pod "test-pod" satisfied condition "running"
STEP: Creating statefulset with conflicting port in namespace statefulset-1164
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1164
Jun 23 01:15:32.065: INFO: Observed stateful pod in namespace: statefulset-1164, name: ss-0, uid: b83e1467-e29f-4bcb-bb3c-0caf45886d16, status phase: Pending. Waiting for statefulset controller to delete.
Jun 23 01:15:33.809: INFO: Observed stateful pod in namespace: statefulset-1164, name: ss-0, uid: b83e1467-e29f-4bcb-bb3c-0caf45886d16, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 01:15:33.858: INFO: Observed stateful pod in namespace: statefulset-1164, name: ss-0, uid: b83e1467-e29f-4bcb-bb3c-0caf45886d16, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 01:15:33.864: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1164
STEP: Removing pod with conflicting port in namespace statefulset-1164
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1164 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:122
Jun 23 01:15:48.129: INFO: Deleting all statefulset in ns statefulset-1164
... skipping 11 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
Should recreate evicted statefulset [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:15:58.430: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 01:15:54.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef" in namespace "downward-api-5693" to be "Succeeded or Failed"
Jun 23 01:15:54.701: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef": Phase="Pending", Reason="", readiness=false. Elapsed: 24.965509ms
Jun 23 01:15:56.726: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049965683s
Jun 23 01:15:58.727: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050743584s
Jun 23 01:16:00.728: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051536848s
Jun 23 01:16:02.727: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050503803s
Jun 23 01:16:04.731: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055208622s
STEP: Saw pod success
Jun 23 01:16:04.731: INFO: Pod "downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef" satisfied condition "Succeeded or Failed"
Jun 23 01:16:04.756: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef container client-container: <nil>
STEP: delete the pod
Jun 23 01:16:04.834: INFO: Waiting for pod downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef to disappear
Jun 23 01:16:04.860: INFO: Pod downwardapi-volume-1db9db6d-47c6-4751-af27-cd879acf83ef no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.461 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 5 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-downwardapi-5fpw
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 01:15:42.083: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5fpw" in namespace "subpath-6792" to be "Succeeded or Failed"
Jun 23 01:15:42.107: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.457852ms
Jun 23 01:15:44.131: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047816946s
Jun 23 01:15:46.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048922179s
Jun 23 01:15:48.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 6.048573864s
Jun 23 01:15:50.131: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 8.047276454s
Jun 23 01:15:52.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 10.048714317s
... skipping 2 lines ...
Jun 23 01:15:58.135: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 16.051449914s
Jun 23 01:16:00.133: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 18.04958854s
Jun 23 01:16:02.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 20.047934492s
Jun 23 01:16:04.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Running", Reason="", readiness=true. Elapsed: 22.048699852s
Jun 23 01:16:06.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.048749217s
STEP: Saw pod success
Jun 23 01:16:06.132: INFO: Pod "pod-subpath-test-downwardapi-5fpw" satisfied condition "Succeeded or Failed"
Jun 23 01:16:06.157: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-downwardapi-5fpw container test-container-subpath-downwardapi-5fpw: <nil>
STEP: delete the pod
Jun 23 01:16:06.213: INFO: Waiting for pod pod-subpath-test-downwardapi-5fpw to disappear
Jun 23 01:16:06.236: INFO: Pod pod-subpath-test-downwardapi-5fpw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5fpw
Jun 23 01:16:06.236: INFO: Deleting pod "pod-subpath-test-downwardapi-5fpw" in namespace "subpath-6792"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with downward pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 29 lines ...
Jun 23 01:15:45.989: INFO: PersistentVolumeClaim pvc-zsjpb found but phase is Pending instead of Bound.
Jun 23 01:15:48.015: INFO: PersistentVolumeClaim pvc-zsjpb found and phase=Bound (4.079928426s)
Jun 23 01:15:48.015: INFO: Waiting up to 3m0s for PersistentVolume local-b8f9z to have phase Bound
Jun 23 01:15:48.039: INFO: PersistentVolume local-b8f9z found and phase=Bound (24.284804ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5b95
STEP: Creating a pod to test subpath
Jun 23 01:15:48.117: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5b95" in namespace "provisioning-8507" to be "Succeeded or Failed"
Jun 23 01:15:48.141: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 24.00829ms
Jun 23 01:15:50.168: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050930015s
Jun 23 01:15:52.168: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050444967s
Jun 23 01:15:54.169: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051902441s
Jun 23 01:15:56.170: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052820691s
STEP: Saw pod success
Jun 23 01:15:56.170: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95" satisfied condition "Succeeded or Failed"
Jun 23 01:15:56.197: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-preprovisionedpv-5b95 container test-container-subpath-preprovisionedpv-5b95: <nil>
STEP: delete the pod
Jun 23 01:15:56.270: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5b95 to disappear
Jun 23 01:15:56.296: INFO: Pod pod-subpath-test-preprovisionedpv-5b95 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5b95
Jun 23 01:15:56.296: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5b95" in namespace "provisioning-8507"
STEP: Creating pod pod-subpath-test-preprovisionedpv-5b95
STEP: Creating a pod to test subpath
Jun 23 01:15:56.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5b95" in namespace "provisioning-8507" to be "Succeeded or Failed"
Jun 23 01:15:56.372: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 24.500696ms
Jun 23 01:15:58.397: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049971321s
Jun 23 01:16:00.399: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051372912s
Jun 23 01:16:02.397: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049776915s
Jun 23 01:16:04.397: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050157023s
Jun 23 01:16:06.397: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050260494s
STEP: Saw pod success
Jun 23 01:16:06.398: INFO: Pod "pod-subpath-test-preprovisionedpv-5b95" satisfied condition "Succeeded or Failed"
Jun 23 01:16:06.422: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-preprovisionedpv-5b95 container test-container-subpath-preprovisionedpv-5b95: <nil>
STEP: delete the pod
Jun 23 01:16:06.486: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5b95 to disappear
Jun 23 01:16:06.510: INFO: Pod pod-subpath-test-preprovisionedpv-5b95 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5b95
Jun 23 01:16:06.510: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5b95" in namespace "provisioning-8507"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":102,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 122 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":4,"skipped":22,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:07.442: INFO: Only supported for providers [openstack] (not gce)
... skipping 147 lines ...
test/e2e/storage/utils/framework.go:23
CSI FSGroupPolicy [LinuxOnly]
test/e2e/storage/csi_mock_volume.go:1636
should not modify fsGroup if fsGroupPolicy=None
test/e2e/storage/csi_mock_volume.go:1660
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:07.619: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 99 lines ...
• [SLOW TEST:22.583 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should manage the lifecycle of a job [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should manage the lifecycle of a job [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:07.847: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:187
... skipping 34 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:16:08.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-214" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 88 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":34,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:12.888: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 81 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:13.293: INFO: Only supported for providers [openstack] (not gce)
... skipping 77 lines ...
Jun 23 01:16:00.179: INFO: PersistentVolumeClaim pvc-lc4bt found but phase is Pending instead of Bound.
Jun 23 01:16:02.206: INFO: PersistentVolumeClaim pvc-lc4bt found and phase=Bound (2.050835618s)
Jun 23 01:16:02.206: INFO: Waiting up to 3m0s for PersistentVolume local-tmjxv to have phase Bound
Jun 23 01:16:02.230: INFO: PersistentVolume local-tmjxv found and phase=Bound (23.607807ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wv74
STEP: Creating a pod to test subpath
Jun 23 01:16:02.308: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wv74" in namespace "provisioning-5207" to be "Succeeded or Failed"
Jun 23 01:16:02.332: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Pending", Reason="", readiness=false. Elapsed: 24.186233ms
Jun 23 01:16:04.358: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049683381s
Jun 23 01:16:06.365: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056757801s
Jun 23 01:16:08.359: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050901446s
Jun 23 01:16:10.361: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053213974s
Jun 23 01:16:12.359: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050293652s
Jun 23 01:16:14.360: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.051235384s
STEP: Saw pod success
Jun 23 01:16:14.360: INFO: Pod "pod-subpath-test-preprovisionedpv-wv74" satisfied condition "Succeeded or Failed"
Jun 23 01:16:14.384: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-subpath-test-preprovisionedpv-wv74 container test-container-subpath-preprovisionedpv-wv74: <nil>
STEP: delete the pod
Jun 23 01:16:14.459: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wv74 to disappear
Jun 23 01:16:14.484: INFO: Pod pod-subpath-test-preprovisionedpv-wv74 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wv74
Jun 23 01:16:14.484: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wv74" in namespace "provisioning-5207"
... skipping 67 lines ...
• [SLOW TEST:11.409 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:16.393: INFO: Only supported for providers [azure] (not gce)
... skipping 198 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":22,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 38 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:16:19.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2219" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":7,"skipped":40,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:38.243 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should support orphan deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:1040
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":2,"skipped":17,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 49 lines ...
test/e2e/kubectl/portforward.go:454
that expects a client request
test/e2e/kubectl/portforward.go:455
should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:459
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":6,"skipped":34,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:26.560: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 172 lines ...
• [SLOW TEST:21.038 seconds]
[sig-api-machinery] Servers with support for API chunking
test/e2e/apimachinery/framework.go:23
should return chunks of results for list calls
test/e2e/apimachinery/chunking.go:79
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:16:28.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:8.099 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:36.661: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 58 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:16:37.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6689" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:37.109: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-5321/configmap-test-d0aa70fa-4774-4768-b4d2-559896a2bdfe
STEP: Creating a pod to test consume configMaps
Jun 23 01:16:13.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7" in namespace "configmap-5321" to be "Succeeded or Failed"
Jun 23 01:16:13.177: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.351446ms
Jun 23 01:16:15.201: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048871727s
Jun 23 01:16:17.219: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066522361s
Jun 23 01:16:19.202: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049679334s
Jun 23 01:16:21.208: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055349894s
Jun 23 01:16:23.203: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050188185s
... skipping 2 lines ...
Jun 23 01:16:29.206: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.053773517s
Jun 23 01:16:31.201: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.048437361s
Jun 23 01:16:33.201: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.048537018s
Jun 23 01:16:35.212: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.059555286s
Jun 23 01:16:37.203: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.050519135s
STEP: Saw pod success
Jun 23 01:16:37.203: INFO: Pod "pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7" satisfied condition "Succeeded or Failed"
Jun 23 01:16:37.227: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7 container env-test: <nil>
STEP: delete the pod
Jun 23 01:16:37.281: INFO: Waiting for pod pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7 to disappear
Jun 23 01:16:37.305: INFO: Pod pod-configmaps-6b12a306-f5e1-427c-9f57-c8768f5cb6d7 no longer exists
[AfterEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:24.443 seconds]
[sig-node] ConfigMap
test/e2e/common/node/framework.go:23
should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":38,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:37.383: INFO: Only supported for providers [aws] (not gce)
... skipping 127 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-node] Probing container
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:10.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:87.509 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should be restarted with a GRPC liveness probe [NodeConformance]
test/e2e/common/node/container_probe.go:543
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]","total":-1,"completed":3,"skipped":16,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:37.990: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
Jun 23 01:16:15.879: INFO: PersistentVolumeClaim pvc-h2s7n found but phase is Pending instead of Bound.
Jun 23 01:16:17.973: INFO: PersistentVolumeClaim pvc-h2s7n found and phase=Bound (16.31253702s)
Jun 23 01:16:17.974: INFO: Waiting up to 3m0s for PersistentVolume local-jcvzd to have phase Bound
Jun 23 01:16:18.040: INFO: PersistentVolume local-jcvzd found and phase=Bound (65.947562ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-r4fm
STEP: Creating a pod to test subpath
Jun 23 01:16:18.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-r4fm" in namespace "provisioning-7402" to be "Succeeded or Failed"
Jun 23 01:16:18.216: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 38.486344ms
Jun 23 01:16:20.241: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063346019s
Jun 23 01:16:22.241: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062895643s
Jun 23 01:16:24.242: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06421254s
Jun 23 01:16:26.241: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063222359s
Jun 23 01:16:28.240: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062477664s
Jun 23 01:16:30.242: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.06461926s
Jun 23 01:16:32.241: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.063295959s
Jun 23 01:16:34.243: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.065704624s
Jun 23 01:16:36.240: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Pending", Reason="", readiness=false. Elapsed: 18.062648805s
Jun 23 01:16:38.244: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.066531827s
STEP: Saw pod success
Jun 23 01:16:38.244: INFO: Pod "pod-subpath-test-preprovisionedpv-r4fm" satisfied condition "Succeeded or Failed"
Jun 23 01:16:38.273: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-subpath-test-preprovisionedpv-r4fm container test-container-subpath-preprovisionedpv-r4fm: <nil>
STEP: delete the pod
Jun 23 01:16:38.330: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r4fm to disappear
Jun 23 01:16:38.355: INFO: Pod pod-subpath-test-preprovisionedpv-r4fm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-r4fm
Jun 23 01:16:38.355: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-r4fm" in namespace "provisioning-7402"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":29,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 144 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:49.747: INFO: Only supported for providers [azure] (not gce)
... skipping 71 lines ...
Jun 23 01:16:15.009: INFO: PersistentVolumeClaim pvc-gljw5 found but phase is Pending instead of Bound.
Jun 23 01:16:17.039: INFO: PersistentVolumeClaim pvc-gljw5 found and phase=Bound (4.084845381s)
Jun 23 01:16:17.039: INFO: Waiting up to 3m0s for PersistentVolume local-tg6kh to have phase Bound
Jun 23 01:16:17.076: INFO: PersistentVolume local-tg6kh found and phase=Bound (36.499377ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-72q6
STEP: Creating a pod to test subpath
Jun 23 01:16:17.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-72q6" in namespace "provisioning-3611" to be "Succeeded or Failed"
Jun 23 01:16:17.219: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.981488ms
Jun 23 01:16:19.245: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068530263s
Jun 23 01:16:21.244: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067480972s
Jun 23 01:16:23.244: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067332056s
Jun 23 01:16:25.244: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068229772s
Jun 23 01:16:27.243: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066909908s
... skipping 2 lines ...
Jun 23 01:16:33.245: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.068454861s
Jun 23 01:16:35.245: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.068476774s
Jun 23 01:16:37.244: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.067613768s
Jun 23 01:16:39.244: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.067894088s
Jun 23 01:16:41.249: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073269218s
STEP: Saw pod success
Jun 23 01:16:41.250: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6" satisfied condition "Succeeded or Failed"
Jun 23 01:16:41.274: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-72q6 container test-container-subpath-preprovisionedpv-72q6: <nil>
STEP: delete the pod
Jun 23 01:16:41.329: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-72q6 to disappear
Jun 23 01:16:41.353: INFO: Pod pod-subpath-test-preprovisionedpv-72q6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-72q6
Jun 23 01:16:41.353: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-72q6" in namespace "provisioning-3611"
STEP: Creating pod pod-subpath-test-preprovisionedpv-72q6
STEP: Creating a pod to test subpath
Jun 23 01:16:41.406: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-72q6" in namespace "provisioning-3611" to be "Succeeded or Failed"
Jun 23 01:16:41.430: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.289497ms
Jun 23 01:16:43.455: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048390271s
Jun 23 01:16:45.456: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049330364s
Jun 23 01:16:47.454: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047699537s
Jun 23 01:16:49.458: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051507126s
STEP: Saw pod success
Jun 23 01:16:49.458: INFO: Pod "pod-subpath-test-preprovisionedpv-72q6" satisfied condition "Succeeded or Failed"
Jun 23 01:16:49.482: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-72q6 container test-container-subpath-preprovisionedpv-72q6: <nil>
STEP: delete the pod
Jun 23 01:16:49.542: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-72q6 to disappear
Jun 23 01:16:49.566: INFO: Pod pod-subpath-test-preprovisionedpv-72q6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-72q6
Jun 23 01:16:49.566: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-72q6" in namespace "provisioning-3611"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":19,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:49.990: INFO: Only supported for providers [azure] (not gce)
... skipping 51 lines ...
• [SLOW TEST:12.399 seconds]
[sig-api-machinery] Generated clientset
test/e2e/apimachinery/framework.go:23
should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
test/e2e/apimachinery/generated_clientset.go:105
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":6,"skipped":32,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:51.449: INFO: Only supported for providers [azure] (not gce)
... skipping 214 lines ...
test/e2e/kubectl/framework.go:23
Update Demo
test/e2e/kubectl/kubectl.go:322
should scale a replication controller [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
... skipping 144 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:315
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":3,"skipped":28,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:56.849: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 158 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:50
should verify that all csinodes have volume limits
test/e2e/storage/testsuites/volumelimits.go:249
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:57.795: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/framework/framework.go:187
... skipping 73 lines ...
• [SLOW TEST:21.055 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
pod should support shared volumes between containers [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":9,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:16:58.615: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 11 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":46,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:16:15.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 141 lines ...
• [SLOW TEST:44.453 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:00.114: INFO: Only supported for providers [openstack] (not gce)
... skipping 107 lines ...
• [SLOW TEST:53.290 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should delete jobs and pods created by cronjob
test/e2e/apimachinery/garbage_collector.go:1145
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":5,"skipped":108,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:00.775: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 39 lines ...
• [SLOW TEST:86.389 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should schedule multiple jobs concurrently [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:00.846: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 443 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:03.047: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 55 lines ...
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:17:02.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:03.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6128" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:03.202: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
test/e2e/common/node/runtime.go:43
when running a container with a new image
test/e2e/common/node/runtime.go:259
should not be able to pull image from invalid registry [NodeConformance]
test/e2e/common/node/runtime.go:370
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:07.417: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/framework.go:187
... skipping 463 lines ...
test/e2e/network/common/framework.go:23
version v1
test/e2e/network/proxy.go:74
should proxy through a service and a pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:08.952: INFO: Only supported for providers [openstack] (not gce)
... skipping 167 lines ...
STEP: Destroying namespace "apply-7727" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
test/e2e/apimachinery/apply.go:59
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":9,"skipped":52,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:09.503: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 57 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:09.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1303" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:09.765: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 121 lines ...
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":10,"skipped":72,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:16.181 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
should invoke init containers on a RestartNever pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:11.055: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:187
... skipping 27 lines ...
Jun 23 01:15:40.062: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-2957 create -f -'
Jun 23 01:15:41.488: INFO: stderr: ""
Jun 23 01:15:41.488: INFO: stdout: "pod/httpd created\n"
Jun 23 01:15:41.488: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 01:15:41.488: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2957" to be "running and ready"
Jun 23 01:15:41.514: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.171375ms
Jun 23 01:15:41.514: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-j1m9' to be 'Running' but was 'Pending'
Jun 23 01:15:43.541: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052565175s
Jun 23 01:15:43.541: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-j1m9' to be 'Running' but was 'Pending'
Jun 23 01:15:45.539: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.050985139s
Jun 23 01:15:45.540: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-j1m9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC }]
Jun 23 01:15:47.540: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.051411811s
Jun 23 01:15:47.540: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-j1m9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC }]
Jun 23 01:15:49.539: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.05051587s
Jun 23 01:15:49.539: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-j1m9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC }]
Jun 23 01:15:51.547: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.058177816s
Jun 23 01:15:51.547: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-j1m9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC }]
Jun 23 01:15:53.542: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.053370238s
Jun 23 01:15:53.542: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-j1m9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:15:41 +0000 UTC }]
Jun 23 01:15:55.541: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.05230762s
Jun 23 01:15:55.541: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 01:15:55.541: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
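The framework polls the pod roughly every two seconds and logs `Phase`, `readiness`, and `Elapsed` until the pod is both Running and Ready (here, 14s for `httpd`). As a sketch (the regex and helper are ours, matched against the poll-line format shown above), the time-to-ready can be recovered from such lines:

```python
import re

# Matches the framework's poll lines, e.g.:
#   Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.05230762s
POLL_RE = re.compile(
    r'Pod "(?P<name>[^"]+)": Phase="(?P<phase>[^"]+)", Reason="[^"]*", '
    r'readiness=(?P<ready>true|false)\. Elapsed: (?P<elapsed>[0-9.]+)'
)

def time_to_ready(lines):
    """Return the first Elapsed value at which the pod was Running with
    readiness=true, or None if it never got there. Note: the framework
    logs sub-second values with an "ms" suffix; the unit suffix is not
    parsed here, which is fine for the (seconds-scale) ready line."""
    for line in lines:
        m = POLL_RE.search(line)
        if m and m.group("phase") == "Running" and m.group("ready") == "true":
            return float(m.group("elapsed"))
    return None

log = [
    'Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.171375ms',
    'Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.050985139s',
    'Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.05230762s',
]
print(time_to_ready(log))  # 14.05230762
```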
[It] should support inline execution and attach
test/e2e/kubectl/kubectl.go:591
STEP: executing a command with run and attach with stdin
... skipping 45 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:407
should support inline execution and attach
test/e2e/kubectl/kubectl.go:591
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":7,"skipped":74,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:12.706: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:14.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3151" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":11,"skipped":73,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 27 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:14.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4517" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":8,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:14.599: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 9 lines ...
test/e2e/storage/testsuites/subpath.go:207
Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:15:36.141: INFO: >>> kubeConfig: /root/.kube/config
... skipping 186 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":7,"skipped":80,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 20 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:15.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-9049" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":9,"skipped":78,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
... skipping 209 lines ...
Jun 23 01:16:25.463: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 01:16:27.461: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061375456s
Jun 23 01:16:27.461: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 01:16:29.462: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 12.062926938s
Jun 23 01:16:29.462: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 01:16:29.462: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 01:16:29.462: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=services-127 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.245.192:80 && echo service-down-failed'
Jun 23 01:16:31.848: INFO: rc: 28
Jun 23 01:16:31.848: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.245.192:80 && echo service-down-failed" in pod services-127/verify-service-down-host-exec-pod: error running /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=services-127 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.245.192:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.68.245.192:80
command terminated with exit code 28
error:
exit status 28
Output:
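An `rc: 28` / `exit status 28` from curl means the operation timed out; here the 2-second `--connect-timeout` expired before the service answered, which is exactly what this negative check ("verifying service is not up") expects. A small illustrative lookup for the curl exit codes that tend to show up in probes like this (meanings per the curl(1) manual; the helper name is ours and the table is deliberately not exhaustive):

```python
# A few curl exit codes commonly seen in connectivity probes
# (meanings per the curl(1) manual; illustrative subset only).
CURL_EXIT_CODES = {
    6: "could not resolve host",
    7: "failed to connect to host",
    28: "operation timed out",
    52: "empty reply from server",
    56: "failure receiving network data",
}

def explain_curl_rc(rc: int) -> str:
    """Map a curl return code to a short description (0 means success)."""
    if rc == 0:
        return "success"
    return CURL_EXIT_CODES.get(rc, f"curl exit code {rc} (see curl(1))")

print(explain_curl_rc(28))  # operation timed out
```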
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-127
STEP: adding service-proxy-name label
STEP: verifying service is not up
Jun 23 01:16:31.936: INFO: Creating new host exec pod
... skipping 22 lines ...
Jun 23 01:16:52.012: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 01:16:54.013: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 22.051340928s
Jun 23 01:16:54.014: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 01:16:56.011: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 24.049266316s
Jun 23 01:16:56.012: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 01:16:56.012: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 01:16:56.012: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=services-127 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.117.104:80 && echo service-down-failed'
Jun 23 01:16:58.437: INFO: rc: 28
Jun 23 01:16:58.437: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.117.104:80 && echo service-down-failed" in pod services-127/verify-service-down-host-exec-pod: error running /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=services-127 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.117.104:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.64.117.104:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-127
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Jun 23 01:16:58.520: INFO: Creating new host exec pod
... skipping 32 lines ...
Jun 23 01:17:09.994: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 01:17:12.029: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059691373s
Jun 23 01:17:12.029: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 23 01:17:14.023: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.053894109s
Jun 23 01:17:14.023: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 23 01:17:14.023: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready"
Jun 23 01:17:14.024: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=services-127 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.245.192:80 && echo service-down-failed'
Jun 23 01:17:16.474: INFO: rc: 28
Jun 23 01:17:16.474: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.245.192:80 && echo service-down-failed" in pod services-127/verify-service-down-host-exec-pod: error running /logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=services-127 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.245.192:80 && echo service-down-failed:
Command stdout:
stderr:
+ curl -g -s --connect-timeout 2 http://100.68.245.192:80
command terminated with exit code 28
error:
exit status 28
Output:
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-127
[AfterEach] [sig-network] Services
test/e2e/framework/framework.go:187
Jun 23 01:17:16.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:92.371 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should implement service.kubernetes.io/service-proxy-name
test/e2e/network/service.go:2156
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":5,"skipped":15,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:16.620: INFO: Only supported for providers [vsphere] (not gce)
... skipping 41 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:17.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3958" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":8,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:17.670: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:187
... skipping 92 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-l9nz
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 01:16:51.746: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l9nz" in namespace "subpath-1359" to be "Succeeded or Failed"
Jun 23 01:16:51.774: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Pending", Reason="", readiness=false. Elapsed: 27.653698ms
Jun 23 01:16:53.802: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056029648s
Jun 23 01:16:55.798: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052203162s
Jun 23 01:16:57.800: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053414043s
Jun 23 01:16:59.802: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055756517s
Jun 23 01:17:01.800: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053309314s
... skipping 3 lines ...
Jun 23 01:17:09.799: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Running", Reason="", readiness=true. Elapsed: 18.05235842s
Jun 23 01:17:11.823: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Running", Reason="", readiness=true. Elapsed: 20.076374358s
Jun 23 01:17:13.799: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Running", Reason="", readiness=true. Elapsed: 22.052347109s
Jun 23 01:17:15.818: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Running", Reason="", readiness=true. Elapsed: 24.071501837s
Jun 23 01:17:17.808: INFO: Pod "pod-subpath-test-configmap-l9nz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061780445s
STEP: Saw pod success
Jun 23 01:17:17.808: INFO: Pod "pod-subpath-test-configmap-l9nz" satisfied condition "Succeeded or Failed"
Jun 23 01:17:17.834: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-subpath-test-configmap-l9nz container test-container-subpath-configmap-l9nz: <nil>
STEP: delete the pod
Jun 23 01:17:17.913: INFO: Waiting for pod pod-subpath-test-configmap-l9nz to disappear
Jun 23 01:17:17.937: INFO: Pod pod-subpath-test-configmap-l9nz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-l9nz
Jun 23 01:17:17.937: INFO: Deleting pod "pod-subpath-test-configmap-l9nz" in namespace "subpath-1359"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with configmap pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:18.030: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:187
... skipping 107 lines ...
Jun 23 01:15:56.178: INFO: PersistentVolumeClaim csi-hostpath4hhpn found but phase is Pending instead of Bound.
Jun 23 01:15:58.204: INFO: PersistentVolumeClaim csi-hostpath4hhpn found but phase is Pending instead of Bound.
Jun 23 01:16:00.230: INFO: PersistentVolumeClaim csi-hostpath4hhpn found but phase is Pending instead of Bound.
Jun 23 01:16:02.257: INFO: PersistentVolumeClaim csi-hostpath4hhpn found and phase=Bound (12.182317836s)
STEP: Creating pod pod-subpath-test-dynamicpv-n2zd
STEP: Creating a pod to test subpath
Jun 23 01:16:02.337: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-n2zd" in namespace "provisioning-288" to be "Succeeded or Failed"
Jun 23 01:16:02.362: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.108758ms
Jun 23 01:16:04.391: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053652916s
Jun 23 01:16:06.394: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056738177s
Jun 23 01:16:08.392: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054384542s
Jun 23 01:16:10.389: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051211057s
Jun 23 01:16:12.389: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.051843389s
... skipping 9 lines ...
Jun 23 01:16:32.390: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.052655171s
Jun 23 01:16:34.391: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.053356158s
Jun 23 01:16:36.390: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.052361054s
Jun 23 01:16:38.391: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.053647417s
Jun 23 01:16:40.389: INFO: Pod "pod-subpath-test-dynamicpv-n2zd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.051712827s
STEP: Saw pod success
Jun 23 01:16:40.389: INFO: Pod "pod-subpath-test-dynamicpv-n2zd" satisfied condition "Succeeded or Failed"
Jun 23 01:16:40.416: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-subpath-test-dynamicpv-n2zd container test-container-subpath-dynamicpv-n2zd: <nil>
STEP: delete the pod
Jun 23 01:16:40.476: INFO: Waiting for pod pod-subpath-test-dynamicpv-n2zd to disappear
Jun 23 01:16:40.501: INFO: Pod pod-subpath-test-dynamicpv-n2zd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-n2zd
Jun 23 01:16:40.501: INFO: Deleting pod "pod-subpath-test-dynamicpv-n2zd" in namespace "provisioning-288"
... skipping 61 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":88,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 31 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9732" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":22,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 46 lines ...
Jun 23 01:17:12.599: INFO: Pod "dns-test-fe325042-1d37-49d6-b711-097949366976": Phase="Running", Reason="", readiness=true. Elapsed: 12.061161202s
Jun 23 01:17:12.599: INFO: Pod "dns-test-fe325042-1d37-49d6-b711-097949366976" satisfied condition "running"
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 23 01:17:12.655: INFO: File wheezy_udp@dns-test-service-3.dns-7912.svc.cluster.local from pod dns-7912/dns-test-fe325042-1d37-49d6-b711-097949366976 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 01:17:12.682: INFO: Lookups using dns-7912/dns-test-fe325042-1d37-49d6-b711-097949366976 failed for: [wheezy_udp@dns-test-service-3.dns-7912.svc.cluster.local]
Jun 23 01:17:17.738: INFO: DNS probes using dns-test-fe325042-1d37-49d6-b711-097949366976 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7912.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7912.svc.cluster.local; sleep 1; done
... skipping 22 lines ...
• [SLOW TEST:44.190 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for ExternalName services [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:22.232: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 36 lines ...
• [SLOW TEST:23.347 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:905
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":6,"skipped":55,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:23.510: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 133 lines ...
test/e2e/storage/utils/framework.go:23
storage capacity
test/e2e/storage/csi_mock_volume.go:1100
unlimited
test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":6,"skipped":28,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:24.988: INFO: Only supported for providers [azure] (not gce)
... skipping 62 lines ...
• [SLOW TEST:24.262 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":6,"skipped":115,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:25.084: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 34 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:25.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1346" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:25.295: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/framework/framework.go:187
... skipping 192 lines ...
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":8,"skipped":55,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:25.453: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 61 lines ...
• [SLOW TEST:11.435 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replica set. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":12,"skipped":75,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:25.852: INFO: Only supported for providers [azure] (not gce)
... skipping 93 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test service account token:
Jun 23 01:17:15.864: INFO: Waiting up to 5m0s for pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591" in namespace "svcaccounts-8718" to be "Succeeded or Failed"
Jun 23 01:17:15.887: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591": Phase="Pending", Reason="", readiness=false. Elapsed: 22.754594ms
Jun 23 01:17:17.920: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056174471s
Jun 23 01:17:19.913: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048675583s
Jun 23 01:17:21.911: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047140658s
Jun 23 01:17:23.931: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066226408s
Jun 23 01:17:25.919: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054816884s
STEP: Saw pod success
Jun 23 01:17:25.919: INFO: Pod "test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591" satisfied condition "Succeeded or Failed"
Jun 23 01:17:25.952: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:17:26.042: INFO: Waiting for pod test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591 to disappear
Jun 23 01:17:26.083: INFO: Pod test-pod-b00f6d5b-733d-46e7-96be-4d552ed00591 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.483 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
should mount projected service account token [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":10,"skipped":98,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:26.185: INFO: Only supported for providers [azure] (not gce)
... skipping 273 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:50
Verify if offline PVC expansion works
test/e2e/storage/testsuites/volume_expand.go:176
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":27,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:28.185: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 26 lines ...
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should store data
test/e2e/storage/testsuites/volumes.go:161
Jun 23 01:16:08.329: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 01:16:08.389: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-5491" in namespace "volume-5491" to be "Succeeded or Failed"
Jun 23 01:16:08.414: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 24.380107ms
Jun 23 01:16:10.440: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050230604s
Jun 23 01:16:12.440: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051042547s
Jun 23 01:16:14.446: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056428653s
Jun 23 01:16:16.443: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053977706s
STEP: Saw pod success
Jun 23 01:16:16.443: INFO: Pod "hostpath-symlink-prep-volume-5491" satisfied condition "Succeeded or Failed"
Jun 23 01:16:16.443: INFO: Deleting pod "hostpath-symlink-prep-volume-5491" in namespace "volume-5491"
Jun 23 01:16:16.479: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-5491" to be fully deleted
Jun 23 01:16:16.521: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
Jun 23 01:16:16.548: INFO: Waiting up to 5m0s for pod "hostpathsymlink-injector" in namespace "volume-5491" to be "running"
Jun 23 01:16:16.572: INFO: Pod "hostpathsymlink-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 24.50691ms
... skipping 84 lines ...
Jun 23 01:17:11.980: INFO: Pod hostpathsymlink-client still exists
Jun 23 01:17:13.982: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 23 01:17:14.008: INFO: Pod hostpathsymlink-client still exists
Jun 23 01:17:15.982: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 23 01:17:16.007: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jun 23 01:17:16.040: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-5491" in namespace "volume-5491" to be "Succeeded or Failed"
Jun 23 01:17:16.070: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 30.288248ms
Jun 23 01:17:18.106: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066028962s
Jun 23 01:17:20.108: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068065831s
Jun 23 01:17:22.100: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06008014s
Jun 23 01:17:24.125: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085130023s
Jun 23 01:17:26.097: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 10.056806741s
Jun 23 01:17:28.095: INFO: Pod "hostpath-symlink-prep-volume-5491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.055115904s
STEP: Saw pod success
Jun 23 01:17:28.095: INFO: Pod "hostpath-symlink-prep-volume-5491" satisfied condition "Succeeded or Failed"
Jun 23 01:17:28.095: INFO: Deleting pod "hostpath-symlink-prep-volume-5491" in namespace "volume-5491"
Jun 23 01:17:28.125: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-5491" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:187
Jun 23 01:17:28.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5491" for this suite.
... skipping 29 lines ...
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":11,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:28.231: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
Jun 23 01:17:28.404: INFO: Creating a PV followed by a PVC
Jun 23 01:17:28.458: INFO: Waiting for PV local-pvkbvm6 to bind to PVC pvc-hjtc4
Jun 23 01:17:28.458: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hjtc4] to have phase Bound
Jun 23 01:17:28.483: INFO: PersistentVolumeClaim pvc-hjtc4 found and phase=Bound (25.156737ms)
Jun 23 01:17:28.483: INFO: Waiting up to 3m0s for PersistentVolume local-pvkbvm6 to have phase Bound
Jun 23 01:17:28.509: INFO: PersistentVolume local-pvkbvm6 found and phase=Bound (26.569387ms)
[It] should fail scheduling due to different NodeAffinity
test/e2e/storage/persistent_volumes-local.go:377
STEP: local-volume-type: dir
Jun 23 01:17:28.590: INFO: Waiting up to 5m0s for pod "pod-86373267-f798-41d7-822b-c46bc6e798fc" in namespace "persistent-local-volumes-test-5183" to be "Unschedulable"
Jun 23 01:17:28.617: INFO: Pod "pod-86373267-f798-41d7-822b-c46bc6e798fc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.862455ms
Jun 23 01:17:28.617: INFO: Pod "pod-86373267-f798-41d7-822b-c46bc6e798fc" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 14 lines ...
• [SLOW TEST:11.228 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
Pod with node different from PV's NodeAffinity
test/e2e/storage/persistent_volumes-local.go:349
should fail scheduling due to different NodeAffinity
test/e2e/storage/persistent_volumes-local.go:377
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":9,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:28.991: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 134 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":93,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:29.568: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 112 lines ...
• [SLOW TEST:22.397 seconds]
[sig-node] PreStop
test/e2e/node/framework.go:23
graceful pod terminated should wait until preStop hook completes the process
test/e2e/node/pre_stop.go:172
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":10,"skipped":64,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 39 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:30.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8990" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":10,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:30.524: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:187
... skipping 47 lines ...
Jun 23 01:17:16.140: INFO: PersistentVolumeClaim pvc-qvl6b found but phase is Pending instead of Bound.
Jun 23 01:17:18.166: INFO: PersistentVolumeClaim pvc-qvl6b found and phase=Bound (6.107904956s)
Jun 23 01:17:18.166: INFO: Waiting up to 3m0s for PersistentVolume local-wxctp to have phase Bound
Jun 23 01:17:18.193: INFO: PersistentVolume local-wxctp found and phase=Bound (27.356096ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mxm8
STEP: Creating a pod to test subpath
Jun 23 01:17:18.310: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mxm8" in namespace "provisioning-8333" to be "Succeeded or Failed"
Jun 23 01:17:18.349: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.656938ms
Jun 23 01:17:20.380: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069766497s
Jun 23 01:17:22.377: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066528832s
Jun 23 01:17:24.381: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070969537s
Jun 23 01:17:26.376: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065967416s
Jun 23 01:17:28.374: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063646178s
Jun 23 01:17:30.390: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080132967s
STEP: Saw pod success
Jun 23 01:17:30.390: INFO: Pod "pod-subpath-test-preprovisionedpv-mxm8" satisfied condition "Succeeded or Failed"
Jun 23 01:17:30.417: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-subpath-test-preprovisionedpv-mxm8 container test-container-volume-preprovisionedpv-mxm8: <nil>
STEP: delete the pod
Jun 23 01:17:30.490: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mxm8 to disappear
Jun 23 01:17:30.517: INFO: Pod pod-subpath-test-preprovisionedpv-mxm8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mxm8
Jun 23 01:17:30.517: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mxm8" in namespace "provisioning-8333"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [sig-api-machinery] health handlers
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:17:31.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename health
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:31.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-8036" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":8,"skipped":40,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Containers
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:17:22.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test override all
Jun 23 01:17:22.455: INFO: Waiting up to 5m0s for pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777" in namespace "containers-4727" to be "Succeeded or Failed"
Jun 23 01:17:22.479: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777": Phase="Pending", Reason="", readiness=false. Elapsed: 24.201292ms
Jun 23 01:17:24.506: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050588794s
Jun 23 01:17:26.512: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056760412s
Jun 23 01:17:28.507: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052295338s
Jun 23 01:17:30.514: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059236956s
Jun 23 01:17:32.506: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050753867s
STEP: Saw pod success
Jun 23 01:17:32.506: INFO: Pod "client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777" satisfied condition "Succeeded or Failed"
Jun 23 01:17:32.531: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:17:32.596: INFO: Waiting for pod client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777 to disappear
Jun 23 01:17:32.622: INFO: Pod client-containers-ed5bc8d2-c688-400d-982e-c3c6966a6777 no longer exists
[AfterEach] [sig-node] Containers
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.435 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:32.702: INFO: Only supported for providers [azure] (not gce)
... skipping 52 lines ...
[It] should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
Jun 23 01:17:25.674: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 01:17:25.674: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-wp4d
STEP: Creating a pod to test subpath
Jun 23 01:17:25.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-wp4d" in namespace "provisioning-5678" to be "Succeeded or Failed"
Jun 23 01:17:25.739: INFO: Pod "pod-subpath-test-inlinevolume-wp4d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.318768ms
Jun 23 01:17:27.765: INFO: Pod "pod-subpath-test-inlinevolume-wp4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056617292s
Jun 23 01:17:29.764: INFO: Pod "pod-subpath-test-inlinevolume-wp4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054967874s
Jun 23 01:17:31.765: INFO: Pod "pod-subpath-test-inlinevolume-wp4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056271811s
Jun 23 01:17:33.768: INFO: Pod "pod-subpath-test-inlinevolume-wp4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059450906s
STEP: Saw pod success
Jun 23 01:17:33.768: INFO: Pod "pod-subpath-test-inlinevolume-wp4d" satisfied condition "Succeeded or Failed"
Jun 23 01:17:33.794: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-subpath-test-inlinevolume-wp4d container test-container-subpath-inlinevolume-wp4d: <nil>
STEP: delete the pod
Jun 23 01:17:33.856: INFO: Waiting for pod pod-subpath-test-inlinevolume-wp4d to disappear
Jun 23 01:17:33.882: INFO: Pod pod-subpath-test-inlinevolume-wp4d no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-wp4d
Jun 23 01:17:33.882: INFO: Deleting pod "pod-subpath-test-inlinevolume-wp4d" in namespace "provisioning-5678"
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 01:17:23.797: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3" in namespace "projected-93" to be "Succeeded or Failed"
Jun 23 01:17:23.916: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 119.550499ms
Jun 23 01:17:25.945: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148619092s
Jun 23 01:17:27.941: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143841316s
Jun 23 01:17:29.947: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150765842s
Jun 23 01:17:31.943: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146730961s
Jun 23 01:17:33.942: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145339456s
STEP: Saw pod success
Jun 23 01:17:33.942: INFO: Pod "downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3" satisfied condition "Succeeded or Failed"
Jun 23 01:17:33.968: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3 container client-container: <nil>
STEP: delete the pod
Jun 23 01:17:34.048: INFO: Waiting for pod downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3 to disappear
Jun 23 01:17:34.083: INFO: Pod downwardapi-volume-f2f66975-90a8-40b5-b525-d95b89baa0f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.595 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:34.148: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 34 lines ...
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":120,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:17:34.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:34.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5987" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":7,"skipped":120,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:10.774 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:35.831: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 177 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read-only inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:39.869: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 44 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_downwardapi.go:108
STEP: Creating a pod to test downward API volume plugin
Jun 23 01:17:31.891: INFO: Waiting up to 5m0s for pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47" in namespace "projected-2566" to be "Succeeded or Failed"
Jun 23 01:17:31.916: INFO: Pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47": Phase="Pending", Reason="", readiness=false. Elapsed: 24.587391ms
Jun 23 01:17:33.942: INFO: Pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0509794s
Jun 23 01:17:35.942: INFO: Pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050560623s
Jun 23 01:17:37.948: INFO: Pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057147043s
Jun 23 01:17:39.944: INFO: Pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052537245s
STEP: Saw pod success
Jun 23 01:17:39.944: INFO: Pod "metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47" satisfied condition "Succeeded or Failed"
Jun 23 01:17:39.969: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47 container client-container: <nil>
STEP: delete the pod
Jun 23 01:17:40.035: INFO: Waiting for pod metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47 to disappear
Jun 23 01:17:40.060: INFO: Pod metadata-volume-97f5bd71-2673-4db5-8701-e04cbe794f47 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.437 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_downwardapi.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":41,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:40.143: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:17:40.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-9887" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:40.587: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 36 lines ...
• [SLOW TEST:11.360 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
should invoke init containers on a RestartAlways pod [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":11,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:41.911: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-00bcb60b-c253-4e08-b76c-1fb78c955026
STEP: Creating a pod to test consume secrets
Jun 23 01:17:30.277: INFO: Waiting up to 5m0s for pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311" in namespace "secrets-438" to be "Succeeded or Failed"
Jun 23 01:17:30.308: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Pending", Reason="", readiness=false. Elapsed: 30.883413ms
Jun 23 01:17:32.331: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054153636s
Jun 23 01:17:34.334: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057299615s
Jun 23 01:17:36.333: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056517529s
Jun 23 01:17:38.333: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056220312s
Jun 23 01:17:40.337: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060472905s
Jun 23 01:17:42.332: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.055429612s
STEP: Saw pod success
Jun 23 01:17:42.332: INFO: Pod "pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311" satisfied condition "Succeeded or Failed"
Jun 23 01:17:42.355: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 01:17:42.413: INFO: Waiting for pod pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311 to disappear
Jun 23 01:17:42.436: INFO: Pod pod-secrets-78f5322f-c0a8-4741-af1f-ed65f3261311 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:187
... skipping 5 lines ...
• [SLOW TEST:12.600 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":65,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:42.565: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 201 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":36,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:44.309: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 155 lines ...
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-4llrv webserver-deployment-5fd5c5f98f- deployment-5096 2ce06878-7f5f-41d6-944d-b602e8cbb92a 10116 0 2022-06-23 01:17:40 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68180 0xc001a68181}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 01:17:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gvtqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvtqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j1m9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 
01:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.3,PodIP:,StartTime:2022-06-23 01:17:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.271: INFO: Pod "webserver-deployment-5fd5c5f98f-6x2nd" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-6x2nd webserver-deployment-5fd5c5f98f- deployment-5096 b331f834-3b24-4da4-8060-ae74e21a213d 10168 0 2022-06-23 01:17:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68350 0xc001a68351}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qds5n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPI
VolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qds5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s284,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.271: INFO: Pod "webserver-deployment-5fd5c5f98f-7t24g" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-7t24g webserver-deployment-5fd5c5f98f- deployment-5096 e61b7fcb-fc8d-4220-98dc-059d0d190cb7 10175 0 2022-06-23 01:17:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a684b0 0xc001a684b1}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zq978,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPI
VolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zq978,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-9jqc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.272: INFO: Pod "webserver-deployment-5fd5c5f98f-8gfjs" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-8gfjs webserver-deployment-5fd5c5f98f- deployment-5096 21a8cd4b-a11f-47da-bd8f-827b517cba97 10201 0 2022-06-23 01:17:40 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68610 0xc001a68611}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 01:17:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.2.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k94qz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k94qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-9jqc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 
01:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.6,PodIP:100.96.2.107,StartTime:2022-06-23 01:17:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.2.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.272: INFO: Pod "webserver-deployment-5fd5c5f98f-9p7dp" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-9p7dp webserver-deployment-5fd5c5f98f- deployment-5096 b405ecc9-bc6f-46d5-b054-24b873821174 10153 0 2022-06-23 01:17:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68810 0xc001a68811}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6k9lz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPI
VolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k9lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j1m9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.272: INFO: Pod "webserver-deployment-5fd5c5f98f-h2hjm" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-h2hjm webserver-deployment-5fd5c5f98f- deployment-5096 66a44275-0feb-48e4-9898-35de05b704c2 10210 0 2022-06-23 01:17:40 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68970 0xc001a68971}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 01:17:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.113\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zmfwf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmfwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s284,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 
01:17:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.4,PodIP:100.96.4.113,StartTime:2022-06-23 01:17:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.272: INFO: Pod "webserver-deployment-5fd5c5f98f-kpjql" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-kpjql webserver-deployment-5fd5c5f98f- deployment-5096 0839195a-b89b-4b11-9fa2-dd0de5a1b37f 10167 0 2022-06-23 01:17:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68b80 0xc001a68b81}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pst78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPI
VolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pst78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-9jqc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.272: INFO: Pod "webserver-deployment-5fd5c5f98f-rkwcd" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-rkwcd webserver-deployment-5fd5c5f98f- deployment-5096 7d946e5c-eb64-4ce5-9e5f-e9e795c154ec 10177 0 2022-06-23 01:17:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68ce0 0xc001a68ce1}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l8dh2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPI
VolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8dh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j1m9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 01:17:45.272: INFO: Pod "webserver-deployment-5fd5c5f98f-rrfpx" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-rrfpx webserver-deployment-5fd5c5f98f- deployment-5096 2905e04a-01b9-4d9e-a7aa-2fecd6717eba 10091 0 2022-06-23 01:17:40 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f 78c7ab45-f693-42fa-b633-1c530f63f886 0xc001a68e40 0xc001a68e41}] [] [{kube-controller-manager Update v1 2022-06-23 01:17:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78c7ab45-f693-42fa-b633-1c530f63f886\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zbn7w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPI
VolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbn7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s284,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:17:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 48 lines ...
• [SLOW TEST:19.064 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":11,"skipped":117,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 93 lines ...
Jun 23 01:17:25.298: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-8597 create -f -'
Jun 23 01:17:26.205: INFO: stderr: ""
Jun 23 01:17:26.205: INFO: stdout: "pod/httpd created\n"
Jun 23 01:17:26.205: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 01:17:26.205: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-8597" to be "running and ready"
Jun 23 01:17:26.231: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.007962ms
Jun 23 01:17:26.231: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:28.256: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050692023s
Jun 23 01:17:28.256: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:30.258: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052746664s
Jun 23 01:17:30.258: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:32.256: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050673865s
Jun 23 01:17:32.256: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:34.264: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058749641s
Jun 23 01:17:34.264: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:36.256: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050612903s
Jun 23 01:17:36.256: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:38.257: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051958346s
Jun 23 01:17:38.257: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:40.266: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.060878204s
Jun 23 01:17:40.266: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:17:42.256: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.050927s
Jun 23 01:17:42.256: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-l43j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC }]
Jun 23 01:17:44.258: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.053337327s
Jun 23 01:17:44.258: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-l43j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:26 +0000 UTC }]
Jun 23 01:17:46.257: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 20.051511165s
Jun 23 01:17:46.257: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 01:17:46.257: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] execing into a container with a failing command
test/e2e/kubectl/kubectl.go:533
Jun 23 01:17:46.257: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-8597 exec httpd --pod-running-timeout=2m0s -- /bin/sh -c exit 42'
... skipping 23 lines ...
test/e2e/kubectl/kubectl.go:407
should return command exit codes
test/e2e/kubectl/kubectl.go:527
execing into a container with a failing command
test/e2e/kubectl/kubectl.go:533
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":7,"skipped":122,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:47.196: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-49f2c875-1c75-40ef-b188-fadb7852a9ec
STEP: Creating a pod to test consume configMaps
Jun 23 01:17:42.828: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b" in namespace "projected-1097" to be "Succeeded or Failed"
Jun 23 01:17:42.854: INFO: Pod "pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.309412ms
Jun 23 01:17:44.879: INFO: Pod "pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050794589s
Jun 23 01:17:46.879: INFO: Pod "pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050629152s
Jun 23 01:17:48.881: INFO: Pod "pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053031816s
STEP: Saw pod success
Jun 23 01:17:48.881: INFO: Pod "pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b" satisfied condition "Succeeded or Failed"
Jun 23 01:17:48.905: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:17:48.972: INFO: Waiting for pod pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b to disappear
Jun 23 01:17:48.995: INFO: Pod pod-projected-configmaps-d8322241-4d35-4019-a3aa-4d53e77c4f4b no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.435 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":78,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:49.108: INFO: Only supported for providers [aws] (not gce)
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 01:17:40.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa" in namespace "projected-5500" to be "Succeeded or Failed"
Jun 23 01:17:40.505: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 25.863618ms
Jun 23 01:17:42.531: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051448565s
Jun 23 01:17:44.534: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05400568s
Jun 23 01:17:46.531: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051125821s
Jun 23 01:17:48.533: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053781218s
Jun 23 01:17:50.532: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052774259s
STEP: Saw pod success
Jun 23 01:17:50.532: INFO: Pod "downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa" satisfied condition "Succeeded or Failed"
Jun 23 01:17:50.557: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa container client-container: <nil>
STEP: delete the pod
Jun 23 01:17:50.628: INFO: Waiting for pod downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa to disappear
Jun 23 01:17:50.654: INFO: Pod downwardapi-volume-31c4b952-dd92-4058-8524-e4daa7fa86aa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.461 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide container's memory limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":62,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 3 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing directory
test/e2e/storage/testsuites/subpath.go:207
Jun 23 01:17:21.453: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 01:17:21.507: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2359" in namespace "provisioning-2359" to be "Succeeded or Failed"
Jun 23 01:17:21.531: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 23.605665ms
Jun 23 01:17:23.556: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048519451s
Jun 23 01:17:25.561: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053414481s
Jun 23 01:17:27.556: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048238651s
Jun 23 01:17:29.557: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049496249s
STEP: Saw pod success
Jun 23 01:17:29.557: INFO: Pod "hostpath-symlink-prep-provisioning-2359" satisfied condition "Succeeded or Failed"
Jun 23 01:17:29.557: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2359" in namespace "provisioning-2359"
Jun 23 01:17:29.586: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2359" to be fully deleted
Jun 23 01:17:29.609: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jtqz
STEP: Creating a pod to test subpath
Jun 23 01:17:29.634: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jtqz" in namespace "provisioning-2359" to be "Succeeded or Failed"
Jun 23 01:17:29.660: INFO: Pod "pod-subpath-test-inlinevolume-jtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 25.216024ms
Jun 23 01:17:31.685: INFO: Pod "pod-subpath-test-inlinevolume-jtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050270681s
Jun 23 01:17:33.685: INFO: Pod "pod-subpath-test-inlinevolume-jtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050655482s
Jun 23 01:17:35.696: INFO: Pod "pod-subpath-test-inlinevolume-jtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061698349s
Jun 23 01:17:37.686: INFO: Pod "pod-subpath-test-inlinevolume-jtqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051480522s
STEP: Saw pod success
Jun 23 01:17:37.686: INFO: Pod "pod-subpath-test-inlinevolume-jtqz" satisfied condition "Succeeded or Failed"
Jun 23 01:17:37.711: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-subpath-test-inlinevolume-jtqz container test-container-volume-inlinevolume-jtqz: <nil>
STEP: delete the pod
Jun 23 01:17:37.779: INFO: Waiting for pod pod-subpath-test-inlinevolume-jtqz to disappear
Jun 23 01:17:37.804: INFO: Pod pod-subpath-test-inlinevolume-jtqz no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jtqz
Jun 23 01:17:37.804: INFO: Deleting pod "pod-subpath-test-inlinevolume-jtqz" in namespace "provisioning-2359"
STEP: Deleting pod
Jun 23 01:17:37.828: INFO: Deleting pod "pod-subpath-test-inlinevolume-jtqz" in namespace "provisioning-2359"
Jun 23 01:17:37.878: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2359" in namespace "provisioning-2359" to be "Succeeded or Failed"
Jun 23 01:17:37.902: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 23.754234ms
Jun 23 01:17:39.928: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049859097s
Jun 23 01:17:41.927: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048411796s
Jun 23 01:17:43.927: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04940054s
Jun 23 01:17:45.927: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04897594s
Jun 23 01:17:47.932: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053834818s
Jun 23 01:17:49.928: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Pending", Reason="", readiness=false. Elapsed: 12.049624053s
Jun 23 01:17:51.927: INFO: Pod "hostpath-symlink-prep-provisioning-2359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.048614765s
STEP: Saw pod success
Jun 23 01:17:51.927: INFO: Pod "hostpath-symlink-prep-provisioning-2359" satisfied condition "Succeeded or Failed"
Jun 23 01:17:51.927: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2359" in namespace "provisioning-2359"
Jun 23 01:17:51.955: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2359" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 23 01:17:51.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2359" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":29,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:15.528 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":12,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:17:57.481: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 49 lines ...
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 23 01:17:47.447: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 01:17:47.447: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kwgk
STEP: Creating a pod to test subpath
Jun 23 01:17:47.478: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kwgk" in namespace "provisioning-95" to be "Succeeded or Failed"
Jun 23 01:17:47.505: INFO: Pod "pod-subpath-test-inlinevolume-kwgk": Phase="Pending", Reason="", readiness=false. Elapsed: 27.54212ms
Jun 23 01:17:49.532: INFO: Pod "pod-subpath-test-inlinevolume-kwgk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053880809s
Jun 23 01:17:51.534: INFO: Pod "pod-subpath-test-inlinevolume-kwgk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056016922s
Jun 23 01:17:53.532: INFO: Pod "pod-subpath-test-inlinevolume-kwgk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053839671s
Jun 23 01:17:55.530: INFO: Pod "pod-subpath-test-inlinevolume-kwgk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052433983s
Jun 23 01:17:57.532: INFO: Pod "pod-subpath-test-inlinevolume-kwgk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053695685s
STEP: Saw pod success
Jun 23 01:17:57.532: INFO: Pod "pod-subpath-test-inlinevolume-kwgk" satisfied condition "Succeeded or Failed"
Jun 23 01:17:57.557: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-inlinevolume-kwgk container test-container-volume-inlinevolume-kwgk: <nil>
STEP: delete the pod
Jun 23 01:17:57.636: INFO: Waiting for pod pod-subpath-test-inlinevolume-kwgk to disappear
Jun 23 01:17:57.659: INFO: Pod pod-subpath-test-inlinevolume-kwgk no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kwgk
Jun 23 01:17:57.660: INFO: Deleting pod "pod-subpath-test-inlinevolume-kwgk" in namespace "provisioning-95"
... skipping 51 lines ...
Jun 23 01:17:29.999: INFO: PersistentVolumeClaim pvc-9qh85 found but phase is Pending instead of Bound.
Jun 23 01:17:32.024: INFO: PersistentVolumeClaim pvc-9qh85 found and phase=Bound (14.311748703s)
Jun 23 01:17:32.024: INFO: Waiting up to 3m0s for PersistentVolume local-fz6bj to have phase Bound
Jun 23 01:17:32.048: INFO: PersistentVolume local-fz6bj found and phase=Bound (23.693251ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fzn2
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 01:17:32.124: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fzn2" in namespace "provisioning-1890" to be "Succeeded or Failed"
Jun 23 01:17:32.149: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.820513ms
Jun 23 01:17:34.175: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050947372s
Jun 23 01:17:36.178: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054040166s
Jun 23 01:17:38.177: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052944319s
Jun 23 01:17:40.188: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Running", Reason="", readiness=true. Elapsed: 8.06482204s
Jun 23 01:17:42.178: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Running", Reason="", readiness=true. Elapsed: 10.054322453s
... skipping 4 lines ...
Jun 23 01:17:52.188: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Running", Reason="", readiness=true. Elapsed: 20.064650343s
Jun 23 01:17:54.176: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Running", Reason="", readiness=true. Elapsed: 22.051993484s
Jun 23 01:17:56.175: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Running", Reason="", readiness=true. Elapsed: 24.051293322s
Jun 23 01:17:58.180: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Running", Reason="", readiness=false. Elapsed: 26.056165512s
Jun 23 01:18:00.176: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.052791482s
STEP: Saw pod success
Jun 23 01:18:00.176: INFO: Pod "pod-subpath-test-preprovisionedpv-fzn2" satisfied condition "Succeeded or Failed"
Jun 23 01:18:00.202: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-preprovisionedpv-fzn2 container test-container-subpath-preprovisionedpv-fzn2: <nil>
STEP: delete the pod
Jun 23 01:18:00.267: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fzn2 to disappear
Jun 23 01:18:00.293: INFO: Pod pod-subpath-test-preprovisionedpv-fzn2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fzn2
Jun 23 01:18:00.293: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fzn2" in namespace "provisioning-1890"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":31,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:00.748: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:62.385 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should replace jobs when ReplaceConcurrent [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":10,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:01.038: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 51 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:01.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1630" for this suite.
•SS
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:01.138: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 65 lines ...
• [SLOW TEST:11.532 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":11,"skipped":63,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:02.304: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 97 lines ...
Jun 23 01:17:34.357: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-6518 create -f -'
Jun 23 01:17:34.654: INFO: stderr: ""
Jun 23 01:17:34.654: INFO: stdout: "pod/httpd created\n"
Jun 23 01:17:34.654: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jun 23 01:17:34.654: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6518" to be "running and ready"
Jun 23 01:17:34.680: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.038681ms
Jun 23 01:17:34.680: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:17:36.708: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054261998s
Jun 23 01:17:36.708: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:17:38.707: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053213979s
Jun 23 01:17:38.707: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:17:40.708: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053570526s
Jun 23 01:17:40.708: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'nodes-us-west3-a-9jqc' to be 'Running' but was 'Pending'
Jun 23 01:17:42.706: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.052104012s
Jun 23 01:17:42.706: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC }]
Jun 23 01:17:44.705: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.051387113s
Jun 23 01:17:44.706: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC }]
Jun 23 01:17:46.709: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.05532165s
Jun 23 01:17:46.709: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC }]
Jun 23 01:17:48.706: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.052004728s
Jun 23 01:17:48.706: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'nodes-us-west3-a-9jqc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 01:17:34 +0000 UTC }]
Jun 23 01:17:50.706: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.051636204s
Jun 23 01:17:50.706: INFO: Pod "httpd" satisfied condition "running and ready"
Jun 23 01:17:50.706: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should contain last line of the log
test/e2e/kubectl/kubectl.go:651
STEP: executing a command with run
Jun 23 01:17:50.706: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-6518 run run-log-test --image=registry.k8s.io/e2e-test-images/busybox:1.29-2 --restart=OnFailure --pod-running-timeout=2m0s -- sh -c sleep 10; seq 100 | while read i; do echo $i; sleep 0.01; done; echo EOF'
Jun 23 01:17:50.866: INFO: stderr: ""
Jun 23 01:17:50.866: INFO: stdout: "pod/run-log-test created\n"
Jun 23 01:17:50.866: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [run-log-test]
Jun 23 01:17:50.866: INFO: Waiting up to 5m0s for pod "run-log-test" in namespace "kubectl-6518" to be "running and ready, or succeeded"
Jun 23 01:17:50.891: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.231077ms
Jun 23 01:17:50.892: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-west3-a-s284' to be 'Running' but was 'Pending'
Jun 23 01:17:52.916: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050099896s
Jun 23 01:17:52.916: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-west3-a-s284' to be 'Running' but was 'Pending'
Jun 23 01:17:54.917: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050961179s
Jun 23 01:17:54.917: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-west3-a-s284' to be 'Running' but was 'Pending'
Jun 23 01:17:56.918: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051656645s
Jun 23 01:17:56.918: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-west3-a-s284' to be 'Running' but was 'Pending'
Jun 23 01:17:58.918: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05146471s
Jun 23 01:17:58.918: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'run-log-test' on 'nodes-us-west3-a-s284' to be 'Running' but was 'Pending'
Jun 23 01:18:00.916: INFO: Pod "run-log-test": Phase="Running", Reason="", readiness=true. Elapsed: 10.04987886s
Jun 23 01:18:00.916: INFO: Pod "run-log-test" satisfied condition "running and ready, or succeeded"
Jun 23 01:18:00.916: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [run-log-test]
Jun 23 01:18:00.916: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-6518 logs -f run-log-test'
Jun 23 01:18:04.784: INFO: stderr: ""
Jun 23 01:18:04.784: INFO: stdout: "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\nEOF\n"
... skipping 20 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:407
should contain last line of the log
test/e2e/kubectl/kubectl.go:651
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":8,"skipped":67,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:05.358: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":131,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:17:57.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
• [SLOW TEST:8.595 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
test/e2e/apps/disruption.go:289
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":9,"skipped":131,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:06.384: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/framework/framework.go:187
... skipping 83 lines ...
• [SLOW TEST:20.853 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
should block an eviction until the PDB is updated to allow it [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":12,"skipped":132,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:06.591: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/framework/framework.go:187
... skipping 21 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 01:18:02.565: INFO: Waiting up to 5m0s for pod "downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80" in namespace "downward-api-4317" to be "Succeeded or Failed"
Jun 23 01:18:02.590: INFO: Pod "downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80": Phase="Pending", Reason="", readiness=false. Elapsed: 24.506916ms
Jun 23 01:18:04.615: INFO: Pod "downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050291207s
Jun 23 01:18:06.615: INFO: Pod "downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049678992s
STEP: Saw pod success
Jun 23 01:18:06.615: INFO: Pod "downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80" satisfied condition "Succeeded or Failed"
Jun 23 01:18:06.641: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80 container dapi-container: <nil>
STEP: delete the pod
Jun 23 01:18:06.727: INFO: Waiting for pod downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80 to disappear
Jun 23 01:18:06.752: INFO: Pod downward-api-1ead5d6f-fb7c-4029-b1f9-caa22e7dfe80 no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:187
Jun 23 01:18:06.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4317" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":76,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] ReplicationController
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 15 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:07.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8906" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":10,"skipped":135,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 30 lines ...
Jun 23 01:18:01.011: INFO: PersistentVolumeClaim pvc-gctg7 found but phase is Pending instead of Bound.
Jun 23 01:18:03.035: INFO: PersistentVolumeClaim pvc-gctg7 found and phase=Bound (4.07510301s)
Jun 23 01:18:03.035: INFO: Waiting up to 3m0s for PersistentVolume local-wzhjz to have phase Bound
Jun 23 01:18:03.059: INFO: PersistentVolume local-wzhjz found and phase=Bound (23.382157ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pptx
STEP: Creating a pod to test exec-volume-test
Jun 23 01:18:03.136: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pptx" in namespace "volume-826" to be "Succeeded or Failed"
Jun 23 01:18:03.160: INFO: Pod "exec-volume-test-preprovisionedpv-pptx": Phase="Pending", Reason="", readiness=false. Elapsed: 23.227261ms
Jun 23 01:18:05.186: INFO: Pod "exec-volume-test-preprovisionedpv-pptx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049542926s
Jun 23 01:18:07.186: INFO: Pod "exec-volume-test-preprovisionedpv-pptx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049511216s
Jun 23 01:18:09.186: INFO: Pod "exec-volume-test-preprovisionedpv-pptx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049484192s
STEP: Saw pod success
Jun 23 01:18:09.186: INFO: Pod "exec-volume-test-preprovisionedpv-pptx" satisfied condition "Succeeded or Failed"
Jun 23 01:18:09.210: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod exec-volume-test-preprovisionedpv-pptx container exec-container-preprovisionedpv-pptx: <nil>
STEP: delete the pod
Jun 23 01:18:09.267: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pptx to disappear
Jun 23 01:18:09.291: INFO: Pod exec-volume-test-preprovisionedpv-pptx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pptx
Jun 23 01:18:09.291: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pptx" in namespace "volume-826"
... skipping 28 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":39,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
Jun 23 01:17:59.675: INFO: PersistentVolumeClaim pvc-r8tz2 found but phase is Pending instead of Bound.
Jun 23 01:18:01.699: INFO: PersistentVolumeClaim pvc-r8tz2 found and phase=Bound (2.047479183s)
Jun 23 01:18:01.699: INFO: Waiting up to 3m0s for PersistentVolume local-bkzbc to have phase Bound
Jun 23 01:18:01.723: INFO: PersistentVolume local-bkzbc found and phase=Bound (23.673138ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q7ln
STEP: Creating a pod to test subpath
Jun 23 01:18:01.796: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q7ln" in namespace "provisioning-2006" to be "Succeeded or Failed"
Jun 23 01:18:01.819: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ln": Phase="Pending", Reason="", readiness=false. Elapsed: 23.690803ms
Jun 23 01:18:03.845: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ln": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049261518s
Jun 23 01:18:05.846: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ln": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050684648s
Jun 23 01:18:07.860: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ln": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063836561s
Jun 23 01:18:09.845: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ln": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049294522s
STEP: Saw pod success
Jun 23 01:18:09.845: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ln" satisfied condition "Succeeded or Failed"
Jun 23 01:18:09.870: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-q7ln container test-container-subpath-preprovisionedpv-q7ln: <nil>
STEP: delete the pod
Jun 23 01:18:09.933: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q7ln to disappear
Jun 23 01:18:09.957: INFO: Pod pod-subpath-test-preprovisionedpv-q7ln no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q7ln
Jun 23 01:18:09.957: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q7ln" in namespace "provisioning-2006"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":13,"skipped":87,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:10.400: INFO: Only supported for providers [azure] (not gce)
... skipping 38 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:10.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1355" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":14,"skipped":103,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 38 lines ...
Jun 23 01:18:01.001: INFO: PersistentVolumeClaim pvc-qrn74 found but phase is Pending instead of Bound.
Jun 23 01:18:03.028: INFO: PersistentVolumeClaim pvc-qrn74 found and phase=Bound (10.147726151s)
Jun 23 01:18:03.028: INFO: Waiting up to 3m0s for PersistentVolume local-bwcgm to have phase Bound
Jun 23 01:18:03.050: INFO: PersistentVolume local-bwcgm found and phase=Bound (22.601656ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tprx
STEP: Creating a pod to test subpath
Jun 23 01:18:03.126: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tprx" in namespace "provisioning-3289" to be "Succeeded or Failed"
Jun 23 01:18:03.150: INFO: Pod "pod-subpath-test-preprovisionedpv-tprx": Phase="Pending", Reason="", readiness=false. Elapsed: 24.157778ms
Jun 23 01:18:05.177: INFO: Pod "pod-subpath-test-preprovisionedpv-tprx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050464544s
Jun 23 01:18:07.176: INFO: Pod "pod-subpath-test-preprovisionedpv-tprx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049970764s
Jun 23 01:18:09.175: INFO: Pod "pod-subpath-test-preprovisionedpv-tprx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048620555s
Jun 23 01:18:11.175: INFO: Pod "pod-subpath-test-preprovisionedpv-tprx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048639046s
STEP: Saw pod success
Jun 23 01:18:11.175: INFO: Pod "pod-subpath-test-preprovisionedpv-tprx" satisfied condition "Succeeded or Failed"
Jun 23 01:18:11.200: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-tprx container test-container-volume-preprovisionedpv-tprx: <nil>
STEP: delete the pod
Jun 23 01:18:11.256: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tprx to disappear
Jun 23 01:18:11.283: INFO: Pod pod-subpath-test-preprovisionedpv-tprx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tprx
Jun 23 01:18:11.283: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tprx" in namespace "provisioning-3289"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":50,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:12.553: INFO: Only supported for providers [vsphere] (not gce)
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 23 01:18:05.615: INFO: Waiting up to 5m0s for pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49" in namespace "emptydir-3580" to be "Succeeded or Failed"
Jun 23 01:18:05.639: INFO: Pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 24.213314ms
Jun 23 01:18:07.664: INFO: Pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048973145s
Jun 23 01:18:09.676: INFO: Pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060902383s
Jun 23 01:18:11.665: INFO: Pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05036246s
Jun 23 01:18:13.664: INFO: Pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048929428s
STEP: Saw pod success
Jun 23 01:18:13.664: INFO: Pod "pod-24a85a2f-7276-43af-ae46-f1c9633b7b49" satisfied condition "Succeeded or Failed"
Jun 23 01:18:13.688: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-24a85a2f-7276-43af-ae46-f1c9633b7b49 container test-container: <nil>
STEP: delete the pod
Jun 23 01:18:13.744: INFO: Waiting for pod pod-24a85a2f-7276-43af-ae46-f1c9633b7b49 to disappear
Jun 23 01:18:13.768: INFO: Pod pod-24a85a2f-7276-43af-ae46-f1c9633b7b49 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.403 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":77,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:13.847: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 82 lines ...
Jun 23 01:17:56.672: INFO: Pod "pvc-volume-tester-hnzwx": Phase="Running", Reason="", readiness=true. Elapsed: 18.048394997s
Jun 23 01:17:56.672: INFO: Pod "pvc-volume-tester-hnzwx" satisfied condition "running"
STEP: Deleting the previously created pod
Jun 23 01:17:56.672: INFO: Deleting pod "pvc-volume-tester-hnzwx" in namespace "csi-mock-volumes-8411"
Jun 23 01:17:56.697: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hnzwx" to be fully deleted
STEP: Checking CSI driver logs
Jun 23 01:17:58.775: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"44300b96-f292-11ec-981c-a23ffe04c775","target_path":"/var/lib/kubelet/pods/2cadf529-a7ff-4468-a793-239737cf4e2c/volumes/kubernetes.io~csi/pvc-a15d4b2a-2db4-49ae-a1f9-21d9d1f8152b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-hnzwx
Jun 23 01:17:58.775: INFO: Deleting pod "pvc-volume-tester-hnzwx" in namespace "csi-mock-volumes-8411"
STEP: Deleting claim pvc-vcwpp
Jun 23 01:17:58.852: INFO: Waiting up to 2m0s for PersistentVolume pvc-a15d4b2a-2db4-49ae-a1f9-21d9d1f8152b to get deleted
Jun 23 01:17:58.879: INFO: PersistentVolume pvc-a15d4b2a-2db4-49ae-a1f9-21d9d1f8152b found and phase=Released (26.74934ms)
Jun 23 01:18:00.904: INFO: PersistentVolume pvc-a15d4b2a-2db4-49ae-a1f9-21d9d1f8152b was removed
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
test/e2e/storage/csi_mock_volume.go:1574
token should not be plumbed down when csiServiceAccountTokenEnabled=false
test/e2e/storage/csi_mock_volume.go:1602
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":9,"skipped":56,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:13.972: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 21 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:13.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name secret-emptykey-test-e43a4244-f74f-4a90-a767-7624b4910b14
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
Jun 23 01:18:14.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1742" for this suite.
... skipping 36 lines ...
• [SLOW TEST:77.208 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should be restarted startup probe fails
test/e2e/common/node/container_probe.go:317
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":4,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:14.151: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 34 lines ...
Only supported for providers [openstack] (not gce)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":10,"skipped":80,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:14.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:14.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-961" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":11,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:14.435: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 71 lines ...
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:14.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail when exceeds active deadline
test/e2e/apps/job.go:293
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
test/e2e/framework/framework.go:187
Jun 23 01:18:16.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4554" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":5,"skipped":58,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:16.483: INFO: Only supported for providers [azure] (not gce)
... skipping 64 lines ...
test/e2e/common/node/framework.go:23
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":43,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:16.574: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 69 lines ...
test/e2e/kubectl/framework.go:23
Kubectl server-side dry-run
test/e2e/kubectl/kubectl.go:954
should check if kubectl can dry-run update Pods [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":12,"skipped":86,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:18.771: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:187
... skipping 9 lines ...
test/e2e/storage/testsuites/volume_expand.go:159
Driver hostPathSymlink doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":9,"skipped":55,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:18.795: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:111
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 01:18:11.033: INFO: Waiting up to 5m0s for pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b" in namespace "security-context-3576" to be "Succeeded or Failed"
Jun 23 01:18:11.057: INFO: Pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.541761ms
Jun 23 01:18:13.091: INFO: Pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057628038s
Jun 23 01:18:15.083: INFO: Pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049285361s
Jun 23 01:18:17.088: INFO: Pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0541797s
Jun 23 01:18:19.082: INFO: Pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048416989s
STEP: Saw pod success
Jun 23 01:18:19.082: INFO: Pod "security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b" satisfied condition "Succeeded or Failed"
Jun 23 01:18:19.105: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b container test-container: <nil>
STEP: delete the pod
Jun 23 01:18:19.164: INFO: Waiting for pod security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b to disappear
Jun 23 01:18:19.189: INFO: Pod security-context-71e5d6f2-5549-4e89-a161-0594b3a5da7b no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 72 lines ...
Jun 23 01:18:00.946: INFO: PersistentVolumeClaim pvc-5gdwp found but phase is Pending instead of Bound.
Jun 23 01:18:02.974: INFO: PersistentVolumeClaim pvc-5gdwp found and phase=Bound (2.053486281s)
Jun 23 01:18:02.974: INFO: Waiting up to 3m0s for PersistentVolume local-md9p6 to have phase Bound
Jun 23 01:18:02.999: INFO: PersistentVolume local-md9p6 found and phase=Bound (24.440633ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z8lp
STEP: Creating a pod to test subpath
Jun 23 01:18:03.075: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z8lp" in namespace "provisioning-3542" to be "Succeeded or Failed"
Jun 23 01:18:03.103: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 27.731184ms
Jun 23 01:18:05.130: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05504648s
Jun 23 01:18:07.129: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053882899s
Jun 23 01:18:09.131: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055848929s
Jun 23 01:18:11.129: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054488054s
STEP: Saw pod success
Jun 23 01:18:11.130: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp" satisfied condition "Succeeded or Failed"
Jun 23 01:18:11.155: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-preprovisionedpv-z8lp container test-container-subpath-preprovisionedpv-z8lp: <nil>
STEP: delete the pod
Jun 23 01:18:11.218: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z8lp to disappear
Jun 23 01:18:11.243: INFO: Pod pod-subpath-test-preprovisionedpv-z8lp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z8lp
Jun 23 01:18:11.244: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z8lp" in namespace "provisioning-3542"
STEP: Creating pod pod-subpath-test-preprovisionedpv-z8lp
STEP: Creating a pod to test subpath
Jun 23 01:18:11.295: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z8lp" in namespace "provisioning-3542" to be "Succeeded or Failed"
Jun 23 01:18:11.320: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 24.589274ms
Jun 23 01:18:13.349: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053098082s
Jun 23 01:18:15.346: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05047195s
Jun 23 01:18:17.346: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Running", Reason="", readiness=true. Elapsed: 6.051004066s
Jun 23 01:18:19.354: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058875307s
STEP: Saw pod success
Jun 23 01:18:19.355: INFO: Pod "pod-subpath-test-preprovisionedpv-z8lp" satisfied condition "Succeeded or Failed"
Jun 23 01:18:19.381: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-preprovisionedpv-z8lp container test-container-subpath-preprovisionedpv-z8lp: <nil>
STEP: delete the pod
Jun 23 01:18:19.440: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z8lp to disappear
Jun 23 01:18:19.466: INFO: Pod pod-subpath-test-preprovisionedpv-z8lp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z8lp
Jun 23 01:18:19.467: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z8lp" in namespace "provisioning-3542"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":13,"skipped":114,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:20.630: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 112 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":136,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-1286f4eb-9409-40fc-8864-568e8a420828
STEP: Creating a pod to test consume configMaps
Jun 23 01:18:14.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18" in namespace "configmap-2185" to be "Succeeded or Failed"
Jun 23 01:18:14.231: INFO: Pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18": Phase="Pending", Reason="", readiness=false. Elapsed: 23.45204ms
Jun 23 01:18:16.256: INFO: Pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047875952s
Jun 23 01:18:18.256: INFO: Pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048442674s
Jun 23 01:18:20.258: INFO: Pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050489665s
Jun 23 01:18:22.256: INFO: Pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047944053s
STEP: Saw pod success
Jun 23 01:18:22.256: INFO: Pod "pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18" satisfied condition "Succeeded or Failed"
Jun 23 01:18:22.279: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:18:22.347: INFO: Waiting for pod pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18 to disappear
Jun 23 01:18:22.373: INFO: Pod pod-configmaps-8cef89d2-c653-4f27-8202-93ef19631d18 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:8.437 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":59,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:22.446: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:17:07.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 117 lines ...
• [SLOW TEST:75.498 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
test/e2e/network/service.go:1803
------------------------------
{"msg":"PASSED [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true","total":-1,"completed":6,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:22.928: INFO: Only supported for providers [aws] (not gce)
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-map-8cced1e9-fb8c-4e8b-9bbb-845a176e72a5
STEP: Creating a pod to test consume secrets
Jun 23 01:18:16.819: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73" in namespace "projected-8374" to be "Succeeded or Failed"
Jun 23 01:18:16.844: INFO: Pod "pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73": Phase="Pending", Reason="", readiness=false. Elapsed: 25.005374ms
Jun 23 01:18:18.872: INFO: Pod "pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052797705s
Jun 23 01:18:20.869: INFO: Pod "pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050002984s
Jun 23 01:18:22.871: INFO: Pod "pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051706553s
STEP: Saw pod success
Jun 23 01:18:22.871: INFO: Pod "pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73" satisfied condition "Succeeded or Failed"
Jun 23 01:18:22.908: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 01:18:22.976: INFO: Waiting for pod pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73 to disappear
Jun 23 01:18:23.001: INFO: Pod pod-projected-secrets-318f7f5e-1bdc-431c-86e8-7b2a57a79f73 no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.450 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":50,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:23.080: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 115 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-cfff1b8b-1755-427a-a1de-b9db1874b6e2
STEP: Creating a pod to test consume configMaps
Jun 23 01:18:19.034: INFO: Waiting up to 5m0s for pod "pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb" in namespace "configmap-9584" to be "Succeeded or Failed"
Jun 23 01:18:19.060: INFO: Pod "pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.336812ms
Jun 23 01:18:21.084: INFO: Pod "pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049392813s
Jun 23 01:18:23.083: INFO: Pod "pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048949677s
Jun 23 01:18:25.090: INFO: Pod "pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055809343s
STEP: Saw pod success
Jun 23 01:18:25.090: INFO: Pod "pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb" satisfied condition "Succeeded or Failed"
Jun 23 01:18:25.130: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb container configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 01:18:25.214: INFO: Waiting for pod pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb to disappear
Jun 23 01:18:25.239: INFO: Pod pod-configmaps-31162455-035a-4ed8-bcd8-4bd72d630bfb no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.491 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":59,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:25.327: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 190 lines ...
• [SLOW TEST:161.291 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
should not disrupt a cloud load-balancer's connectivity during rollout
test/e2e/apps/deployment.go:163
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:25.699: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/framework/framework.go:187
... skipping 32 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":15,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:19.257: INFO: >>> kubeConfig: /root/.kube/config
... skipping 3 lines ...
[It] should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
Jun 23 01:18:19.431: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 23 01:18:19.431: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7njv
STEP: Creating a pod to test subpath
Jun 23 01:18:19.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7njv" in namespace "provisioning-9183" to be "Succeeded or Failed"
Jun 23 01:18:19.485: INFO: Pod "pod-subpath-test-inlinevolume-7njv": Phase="Pending", Reason="", readiness=false. Elapsed: 23.099248ms
Jun 23 01:18:21.511: INFO: Pod "pod-subpath-test-inlinevolume-7njv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04987048s
Jun 23 01:18:23.514: INFO: Pod "pod-subpath-test-inlinevolume-7njv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052891334s
Jun 23 01:18:25.511: INFO: Pod "pod-subpath-test-inlinevolume-7njv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049784716s
STEP: Saw pod success
Jun 23 01:18:25.511: INFO: Pod "pod-subpath-test-inlinevolume-7njv" satisfied condition "Succeeded or Failed"
Jun 23 01:18:25.536: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-inlinevolume-7njv container test-container-subpath-inlinevolume-7njv: <nil>
STEP: delete the pod
Jun 23 01:18:25.598: INFO: Waiting for pod pod-subpath-test-inlinevolume-7njv to disappear
Jun 23 01:18:25.621: INFO: Pod pod-subpath-test-inlinevolume-7njv no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7njv
Jun 23 01:18:25.621: INFO: Deleting pod "pod-subpath-test-inlinevolume-7njv" in namespace "provisioning-9183"
... skipping 14 lines ...
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":16,"skipped":107,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:25.749: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 183 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume at the same time
test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":43,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 68 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:27.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-9866" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":17,"skipped":123,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:27.213: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 01:18:22.642: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e" in namespace "projected-8922" to be "Succeeded or Failed"
Jun 23 01:18:22.667: INFO: Pod "downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.95164ms
Jun 23 01:18:24.694: INFO: Pod "downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051266012s
Jun 23 01:18:26.693: INFO: Pod "downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050426932s
Jun 23 01:18:28.692: INFO: Pod "downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049432066s
STEP: Saw pod success
Jun 23 01:18:28.692: INFO: Pod "downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e" satisfied condition "Succeeded or Failed"
Jun 23 01:18:28.716: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e container client-container: <nil>
STEP: delete the pod
Jun 23 01:18:28.774: INFO: Waiting for pod downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e to disappear
Jun 23 01:18:28.798: INFO: Pod downwardapi-volume-26f5b0b7-70f5-466c-9c89-5369d580f12e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.422 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":139,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:28.872: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 98 lines ...
• [SLOW TEST:6.563 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Provider:GCE]
test/e2e/network/dns.go:70
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Provider:GCE]","total":-1,"completed":7,"skipped":40,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:29.604: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 85 lines ...
Jun 23 01:17:02.191: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5867
Jun 23 01:17:02.222: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5867
Jun 23 01:17:02.254: INFO: creating *v1.StatefulSet: csi-mock-volumes-5867-58/csi-mockplugin
Jun 23 01:17:02.294: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5867
Jun 23 01:17:02.322: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5867"
Jun 23 01:17:02.373: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5867 to register on node nodes-us-west3-a-j1m9
I0623 01:17:10.014522 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0623 01:17:10.038519 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5867","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0623 01:17:10.074136 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0623 01:17:10.098591 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0623 01:17:10.162256 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5867","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0623 01:17:10.310634 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5867"},"Error":"","FullError":null}
STEP: Creating pod
Jun 23 01:17:18.816: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 23 01:17:18.880: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-gzczs" in namespace "csi-mock-volumes-5867" to be "running"
Jun 23 01:17:18.912: INFO: Pod "pvc-volume-tester-gzczs": Phase="Pending", Reason="", readiness=false. Elapsed: 32.244399ms
I0623 01:17:18.929756 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0623 01:17:20.002238 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e"}}},"Error":"","FullError":null}
Jun 23 01:17:20.937: INFO: Pod "pvc-volume-tester-gzczs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05741658s
I0623 01:17:22.848943 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 01:17:22.881075 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 01:17:22.906997 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 23 01:17:22.932: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:17:22.933: INFO: ExecWithOptions: Clientset creation
Jun 23 01:17:22.933: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/csi-mock-volumes-5867-58/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-5867%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-5867%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 23 01:17:22.938: INFO: Pod "pvc-volume-tester-gzczs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05818684s
I0623 01:17:23.171092 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-5867/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e","storage.kubernetes.io/csiProvisionerIdentity":"1655947030095-8081-csi-mock-csi-mock-volumes-5867"}},"Response":{},"Error":"","FullError":null}
I0623 01:17:23.203158 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 01:17:23.230464 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 01:17:23.254406 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 23 01:17:23.280: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:17:23.281: INFO: ExecWithOptions: Clientset creation
Jun 23 01:17:23.281: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/csi-mock-volumes-5867-58/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Faf6df839-8adc-498c-8761-8b793eac8859%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Faf6df839-8adc-498c-8761-8b793eac8859%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 23 01:17:23.511: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:17:23.512: INFO: ExecWithOptions: Clientset creation
Jun 23 01:17:23.512: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/csi-mock-volumes-5867-58/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Faf6df839-8adc-498c-8761-8b793eac8859%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Faf6df839-8adc-498c-8761-8b793eac8859%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 23 01:17:23.737: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:17:23.738: INFO: ExecWithOptions: Clientset creation
Jun 23 01:17:23.738: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/csi-mock-volumes-5867-58/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Faf6df839-8adc-498c-8761-8b793eac8859%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0623 01:17:24.111110 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-5867/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/af6df839-8adc-498c-8761-8b793eac8859/volumes/kubernetes.io~csi/pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e","storage.kubernetes.io/csiProvisionerIdentity":"1655947030095-8081-csi-mock-csi-mock-volumes-5867"}},"Response":{},"Error":"","FullError":null}
Jun 23 01:17:24.937: INFO: Pod "pvc-volume-tester-gzczs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057349885s
Jun 23 01:17:26.939: INFO: Pod "pvc-volume-tester-gzczs": Phase="Running", Reason="", readiness=true. Elapsed: 8.059517037s
Jun 23 01:17:26.939: INFO: Pod "pvc-volume-tester-gzczs" satisfied condition "running"
Jun 23 01:17:26.939: INFO: Deleting pod "pvc-volume-tester-gzczs" in namespace "csi-mock-volumes-5867"
Jun 23 01:17:26.967: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gzczs" to be fully deleted
Jun 23 01:17:28.606: INFO: >>> kubeConfig: /root/.kube/config
Jun 23 01:17:28.606: INFO: ExecWithOptions: Clientset creation
Jun 23 01:17:28.607: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/csi-mock-volumes-5867-58/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Faf6df839-8adc-498c-8761-8b793eac8859%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0623 01:17:28.831663 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/af6df839-8adc-498c-8761-8b793eac8859/volumes/kubernetes.io~csi/pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e/mount"},"Response":{},"Error":"","FullError":null}
I0623 01:17:28.907963 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0623 01:17:28.932079 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-5867/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
I0623 01:17:31.073258 7232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 23 01:17:32.044: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5bmtn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5867", SelfLink:"", UID:"3918ff7a-6120-43aa-a1cf-a12c85514b3e", ResourceVersion:"8965", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0005db188), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00172e890), VolumeMode:(*v1.PersistentVolumeMode)(0xc00172e8a0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 01:17:32.044: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5bmtn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5867", SelfLink:"", UID:"3918ff7a-6120-43aa-a1cf-a12c85514b3e", ResourceVersion:"8968", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"nodes-us-west3-a-j1m9"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002015230), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002015260), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002350910), VolumeMode:(*v1.PersistentVolumeMode)(0xc002350920), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 01:17:32.044: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5bmtn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5867", SelfLink:"", UID:"3918ff7a-6120-43aa-a1cf-a12c85514b3e", ResourceVersion:"8969", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5867", "volume.kubernetes.io/selected-node":"nodes-us-west3-a-j1m9", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5867"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b45ed8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b45f08), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b45f38), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002617ef0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002617f00), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 01:17:32.044: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5bmtn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5867", SelfLink:"", UID:"3918ff7a-6120-43aa-a1cf-a12c85514b3e", ResourceVersion:"9004", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5867", "volume.kubernetes.io/selected-node":"nodes-us-west3-a-j1m9", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5867"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002710828), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002710858), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002710888), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e", StorageClassName:(*string)(0xc002724480), VolumeMode:(*v1.PersistentVolumeMode)(0xc002724490), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 23 01:17:32.045: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5bmtn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5867", SelfLink:"", UID:"3918ff7a-6120-43aa-a1cf-a12c85514b3e", ResourceVersion:"9005", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5867", "volume.kubernetes.io/selected-node":"nodes-us-west3-a-j1m9", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5867"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0027108d0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002710900), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002710930), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 1, 17, 20, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002710960), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3918ff7a-6120-43aa-a1cf-a12c85514b3e", StorageClassName:(*string)(0xc0027244c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0027244d0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
... skipping 49 lines ...
test/e2e/storage/utils/framework.go:23
storage capacity
test/e2e/storage/csi_mock_volume.go:1100
exhausted, late binding, no topology
test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":5,"skipped":69,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:30.360: INFO: Only supported for providers [azure] (not gce)
... skipping 39 lines ...
Jun 23 01:18:03.442: INFO: ExecWithOptions: Clientset creation
Jun 23 01:18:03.442: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/sctp-7261/pods/hostexec-nodes-us-west3-a-9jqc-77zcm/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 01:18:03.646: INFO: exec nodes-us-west3-a-9jqc: command: lsmod | grep sctp
Jun 23 01:18:03.646: INFO: exec nodes-us-west3-a-9jqc: stdout: ""
Jun 23 01:18:03.646: INFO: exec nodes-us-west3-a-9jqc: stderr: ""
Jun 23 01:18:03.646: INFO: exec nodes-us-west3-a-9jqc: exit code: 0
Jun 23 01:18:03.646: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 23 01:18:03.646: INFO: the sctp module is not loaded on node: nodes-us-west3-a-9jqc
Jun 23 01:18:03.646: INFO: Executing cmd "lsmod | grep sctp" on node nodes-us-west3-a-s284
Jun 23 01:18:03.672: INFO: Waiting up to 5m0s for pod "hostexec-nodes-us-west3-a-s284-frtqz" in namespace "sctp-7261" to be "running"
Jun 23 01:18:03.696: INFO: Pod "hostexec-nodes-us-west3-a-s284-frtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 23.866867ms
Jun 23 01:18:05.722: INFO: Pod "hostexec-nodes-us-west3-a-s284-frtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050541059s
Jun 23 01:18:07.720: INFO: Pod "hostexec-nodes-us-west3-a-s284-frtqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048619113s
... skipping 5 lines ...
Jun 23 01:18:11.725: INFO: ExecWithOptions: Clientset creation
Jun 23 01:18:11.725: INFO: ExecWithOptions: execute(POST https://34.106.168.174/api/v1/namespaces/sctp-7261/pods/hostexec-nodes-us-west3-a-s284-frtqz/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 01:18:11.955: INFO: exec nodes-us-west3-a-s284: command: lsmod | grep sctp
Jun 23 01:18:11.955: INFO: exec nodes-us-west3-a-s284: stdout: ""
Jun 23 01:18:11.955: INFO: exec nodes-us-west3-a-s284: stderr: ""
Jun 23 01:18:11.955: INFO: exec nodes-us-west3-a-s284: exit code: 0
Jun 23 01:18:11.955: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Jun 23 01:18:11.955: INFO: the sctp module is not loaded on node: nodes-us-west3-a-s284
STEP: Deleting pod hostexec-nodes-us-west3-a-9jqc-77zcm in namespace sctp-7261
STEP: Deleting pod hostexec-nodes-us-west3-a-s284-frtqz in namespace sctp-7261
STEP: creating service sctp-endpoint-test in namespace sctp-7261
Jun 23 01:18:12.083: INFO: Service sctp-endpoint-test in namespace sctp-7261 found.
STEP: validating endpoints do not exist yet
... skipping 56 lines ...
• [SLOW TEST:30.154 seconds]
[sig-network] SCTP [LinuxOnly]
test/e2e/network/common/framework.go:23
should allow creating a basic SCTP service with pod and endpoints
test/e2e/network/service.go:4070
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints","total":-1,"completed":11,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:31.328: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 73 lines ...
Jun 23 01:18:25.949: INFO: Running '/logs/artifacts/5366fe45-f290-11ec-8dfe-daa417708791/kubectl --server=https://34.106.168.174 --kubeconfig=/root/.kube/config --namespace=kubectl-1598 create -f -'
Jun 23 01:18:26.260: INFO: stderr: ""
Jun 23 01:18:26.260: INFO: stdout: "pod/pause created\n"
Jun 23 01:18:26.260: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 23 01:18:26.260: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1598" to be "running and ready"
Jun 23 01:18:26.286: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 25.59435ms
Jun 23 01:18:26.286: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:18:28.314: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053574823s
Jun 23 01:18:28.314: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'nodes-us-west3-a-l43j' to be 'Running' but was 'Pending'
Jun 23 01:18:30.311: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.05098201s
Jun 23 01:18:30.311: INFO: Pod "pause" satisfied condition "running and ready"
Jun 23 01:18:30.311: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
test/e2e/framework/framework.go:647
STEP: adding the label testing-label with value testing-label-value to a pod
... skipping 35 lines ...
test/e2e/kubectl/framework.go:23
Kubectl label
test/e2e/kubectl/kubectl.go:1481
should update the label on a resource [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":13,"skipped":77,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:15.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
• [SLOW TEST:16.644 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":14,"skipped":77,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:32.007: INFO: Only supported for providers [aws] (not gce)
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/sysctl.go:67
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
Jun 23 01:18:31.186: INFO: Waiting up to 3m0s for pod "sysctl-ca6792d8-e7e4-4986-9024-33c378a4eaf5" in namespace "sysctl-7635" to be "completed"
Jun 23 01:18:31.211: INFO: Pod "sysctl-ca6792d8-e7e4-4986-9024-33c378a4eaf5": Phase="Pending", Reason="", readiness=false. Elapsed: 25.468198ms
Jun 23 01:18:33.240: INFO: Pod "sysctl-ca6792d8-e7e4-4986-9024-33c378a4eaf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053896488s
Jun 23 01:18:35.238: INFO: Pod "sysctl-ca6792d8-e7e4-4986-9024-33c378a4eaf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051520414s
Jun 23 01:18:35.238: INFO: Pod "sysctl-ca6792d8-e7e4-4986-9024-33c378a4eaf5" satisfied condition "completed"
... skipping 9 lines ...
• [SLOW TEST:6.379 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/framework.go:23
should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":13,"skipped":155,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-5f341b42-158e-4c48-9a3b-d3c72027184a
STEP: Creating a pod to test consume configMaps
Jun 23 01:18:30.610: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c" in namespace "projected-6116" to be "Succeeded or Failed"
Jun 23 01:18:30.636: INFO: Pod "pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.784109ms
Jun 23 01:18:32.662: INFO: Pod "pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051425695s
Jun 23 01:18:34.663: INFO: Pod "pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052555486s
Jun 23 01:18:36.663: INFO: Pod "pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05280008s
STEP: Saw pod success
Jun 23 01:18:36.664: INFO: Pod "pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c" satisfied condition "Succeeded or Failed"
Jun 23 01:18:36.688: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:18:36.749: INFO: Waiting for pod pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c to disappear
Jun 23 01:18:36.774: INFO: Pod pod-projected-configmaps-7842b7ee-4db2-49f1-aa14-91e6eb50e31c no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:6.453 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":73,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 3 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
Jun 23 01:18:23.400: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 23 01:18:23.457: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2135" in namespace "provisioning-2135" to be "Succeeded or Failed"
Jun 23 01:18:23.481: INFO: Pod "hostpath-symlink-prep-provisioning-2135": Phase="Pending", Reason="", readiness=false. Elapsed: 23.921782ms
Jun 23 01:18:25.505: INFO: Pod "hostpath-symlink-prep-provisioning-2135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04778087s
Jun 23 01:18:27.506: INFO: Pod "hostpath-symlink-prep-provisioning-2135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048097069s
STEP: Saw pod success
Jun 23 01:18:27.506: INFO: Pod "hostpath-symlink-prep-provisioning-2135" satisfied condition "Succeeded or Failed"
Jun 23 01:18:27.506: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2135" in namespace "provisioning-2135"
Jun 23 01:18:27.540: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2135" to be fully deleted
Jun 23 01:18:27.564: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-422b
STEP: Creating a pod to test subpath
Jun 23 01:18:27.591: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-422b" in namespace "provisioning-2135" to be "Succeeded or Failed"
Jun 23 01:18:27.615: INFO: Pod "pod-subpath-test-inlinevolume-422b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.260031ms
Jun 23 01:18:29.642: INFO: Pod "pod-subpath-test-inlinevolume-422b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051024285s
Jun 23 01:18:31.640: INFO: Pod "pod-subpath-test-inlinevolume-422b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049527641s
Jun 23 01:18:33.643: INFO: Pod "pod-subpath-test-inlinevolume-422b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052414454s
STEP: Saw pod success
Jun 23 01:18:33.643: INFO: Pod "pod-subpath-test-inlinevolume-422b" satisfied condition "Succeeded or Failed"
Jun 23 01:18:33.668: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-inlinevolume-422b container test-container-subpath-inlinevolume-422b: <nil>
STEP: delete the pod
Jun 23 01:18:33.737: INFO: Waiting for pod pod-subpath-test-inlinevolume-422b to disappear
Jun 23 01:18:33.765: INFO: Pod pod-subpath-test-inlinevolume-422b no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-422b
Jun 23 01:18:33.765: INFO: Deleting pod "pod-subpath-test-inlinevolume-422b" in namespace "provisioning-2135"
STEP: Deleting pod
Jun 23 01:18:33.789: INFO: Deleting pod "pod-subpath-test-inlinevolume-422b" in namespace "provisioning-2135"
Jun 23 01:18:33.838: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2135" in namespace "provisioning-2135" to be "Succeeded or Failed"
Jun 23 01:18:33.867: INFO: Pod "hostpath-symlink-prep-provisioning-2135": Phase="Pending", Reason="", readiness=false. Elapsed: 28.512861ms
Jun 23 01:18:35.893: INFO: Pod "hostpath-symlink-prep-provisioning-2135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054173888s
Jun 23 01:18:37.893: INFO: Pod "hostpath-symlink-prep-provisioning-2135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05491007s
STEP: Saw pod success
Jun 23 01:18:37.893: INFO: Pod "hostpath-symlink-prep-provisioning-2135" satisfied condition "Succeeded or Failed"
Jun 23 01:18:37.894: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2135" in namespace "provisioning-2135"
Jun 23 01:18:37.931: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2135" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
Jun 23 01:18:37.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2135" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":73,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:38.034: INFO: Only supported for providers [vsphere] (not gce)
... skipping 44 lines ...
STEP: Destroying namespace "services-3934" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":12,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:38.623: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:187
... skipping 96 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:39.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-7249" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":13,"skipped":95,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:39.161: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 01:18:27.474: INFO: Waiting up to 5m0s for pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881" in namespace "security-context-7532" to be "Succeeded or Failed"
Jun 23 01:18:27.497: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Pending", Reason="", readiness=false. Elapsed: 23.386645ms
Jun 23 01:18:29.529: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055407173s
Jun 23 01:18:31.525: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051187583s
Jun 23 01:18:33.532: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058050353s
Jun 23 01:18:35.521: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047318452s
Jun 23 01:18:37.522: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048137285s
Jun 23 01:18:39.525: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.050498962s
STEP: Saw pod success
Jun 23 01:18:39.525: INFO: Pod "security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881" satisfied condition "Succeeded or Failed"
Jun 23 01:18:39.549: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881 container test-container: <nil>
STEP: delete the pod
Jun 23 01:18:39.609: INFO: Waiting for pod security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881 to disappear
Jun 23 01:18:39.632: INFO: Pod security-context-6cb4a25c-9610-445a-b2a0-0631e61b1881 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
... skipping 41 lines ...
Jun 23 01:18:30.512: INFO: PersistentVolumeClaim pvc-7kxmd found but phase is Pending instead of Bound.
Jun 23 01:18:32.536: INFO: PersistentVolumeClaim pvc-7kxmd found and phase=Bound (4.073654409s)
Jun 23 01:18:32.536: INFO: Waiting up to 3m0s for PersistentVolume local-8glgx to have phase Bound
Jun 23 01:18:32.559: INFO: PersistentVolume local-8glgx found and phase=Bound (23.513593ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k8q7
STEP: Creating a pod to test subpath
Jun 23 01:18:32.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k8q7" in namespace "provisioning-4361" to be "Succeeded or Failed"
Jun 23 01:18:32.659: INFO: Pod "pod-subpath-test-preprovisionedpv-k8q7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.434752ms
Jun 23 01:18:34.687: INFO: Pod "pod-subpath-test-preprovisionedpv-k8q7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051736801s
Jun 23 01:18:36.684: INFO: Pod "pod-subpath-test-preprovisionedpv-k8q7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049147173s
Jun 23 01:18:38.685: INFO: Pod "pod-subpath-test-preprovisionedpv-k8q7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049920235s
STEP: Saw pod success
Jun 23 01:18:38.685: INFO: Pod "pod-subpath-test-preprovisionedpv-k8q7" satisfied condition "Succeeded or Failed"
Jun 23 01:18:38.710: INFO: Trying to get logs from node nodes-us-west3-a-l43j pod pod-subpath-test-preprovisionedpv-k8q7 container test-container-subpath-preprovisionedpv-k8q7: <nil>
STEP: delete the pod
Jun 23 01:18:38.773: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k8q7 to disappear
Jun 23 01:18:38.797: INFO: Pod pod-subpath-test-preprovisionedpv-k8q7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k8q7
Jun 23 01:18:38.797: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k8q7" in namespace "provisioning-4361"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:39.744: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 71 lines ...
• [SLOW TEST:8.440 seconds]
[sig-node] Events
test/e2e/node/framework.go:23
should be sent by kubelets and the scheduler about pods scheduling and running
test/e2e/node/events.go:41
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running ","total":-1,"completed":12,"skipped":92,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:39.875: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 37 lines ...
Driver csi-hostpath doesn't support InlineVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":134,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:39.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:39.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1956" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":19,"skipped":134,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:40.005: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in container's args
Jun 23 01:18:31.769: INFO: Waiting up to 5m0s for pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df" in namespace "var-expansion-443" to be "Succeeded or Failed"
Jun 23 01:18:31.832: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df": Phase="Pending", Reason="", readiness=false. Elapsed: 62.806714ms
Jun 23 01:18:33.858: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089002996s
Jun 23 01:18:35.858: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08920034s
Jun 23 01:18:37.861: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091415734s
Jun 23 01:18:39.859: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089343179s
Jun 23 01:18:41.859: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089711433s
STEP: Saw pod success
Jun 23 01:18:41.859: INFO: Pod "var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df" satisfied condition "Succeeded or Failed"
Jun 23 01:18:41.884: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df container dapi-container: <nil>
STEP: delete the pod
Jun 23 01:18:41.943: INFO: Waiting for pod var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df to disappear
Jun 23 01:18:41.968: INFO: Pod var-expansion-71f2ebba-1888-41b2-82a2-420efbde10df no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.468 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should allow substituting values in a container's args [NodeConformance] [Conformance]
[90mtest/e2e/framework/framework.go:647[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}
[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 51 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:61
STEP: Creating configMap with name projected-configmap-test-volume-de16a524-e6a9-4216-971e-f7d3fed0bf02
STEP: Creating a pod to test consume configMaps
Jun 23 01:18:32.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c" in namespace "projected-2831" to be "Succeeded or Failed"
Jun 23 01:18:32.278: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.286474ms
Jun 23 01:18:34.304: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050103344s
Jun 23 01:18:36.304: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050232571s
Jun 23 01:18:38.304: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050214084s
Jun 23 01:18:40.306: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051948104s
Jun 23 01:18:42.317: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063222359s
STEP: Saw pod success
Jun 23 01:18:42.317: INFO: Pod "pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c" satisfied condition "Succeeded or Failed"
Jun 23 01:18:42.342: INFO: Trying to get logs from node nodes-us-west3-a-s284 pod pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c container agnhost-container: <nil>
STEP: delete the pod
Jun 23 01:18:42.406: INFO: Waiting for pod pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c to disappear
Jun 23 01:18:42.433: INFO: Pod pod-projected-configmaps-53504708-dd9e-447d-9555-38d0db12697c no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
... skipping 4 lines ...
• [SLOW TEST:10.474 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:61
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":80,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 19 lines ...
test/e2e/framework/framework.go:187
Jun 23 01:18:42.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5963" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:42.543: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 164 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":9,"skipped":104,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:43.085: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 83 lines ...
• [SLOW TEST:60.320 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:44.707: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/framework.go:187
... skipping 97 lines ...
• [SLOW TEST:8.537 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":20,"skipped":152,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:48.695: INFO: Only supported for providers [azure] (not gce)
... skipping 37 lines ...
Driver hostPathSymlink doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":11,"skipped":63,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 01:18:42.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:7.065 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should update annotations on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:49.504: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:187
... skipping 89 lines ...
• [SLOW TEST:251.738 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:49.710: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
... skipping 70 lines ...
• [SLOW TEST:85.924 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
optional updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":90,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:51.911: INFO: Only supported for providers [azure] (not gce)
... skipping 201 lines ...
Jun 23 01:18:46.593: INFO: PersistentVolumeClaim pvc-ztxfn found but phase is Pending instead of Bound.
Jun 23 01:18:48.654: INFO: PersistentVolumeClaim pvc-ztxfn found and phase=Bound (6.152491985s)
Jun 23 01:18:48.655: INFO: Waiting up to 3m0s for PersistentVolume local-fts4g to have phase Bound
Jun 23 01:18:48.680: INFO: PersistentVolume local-fts4g found and phase=Bound (25.575444ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5gwn
STEP: Creating a pod to test subpath
Jun 23 01:18:48.760: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5gwn" in namespace "provisioning-4223" to be "Succeeded or Failed"
Jun 23 01:18:48.799: INFO: Pod "pod-subpath-test-preprovisionedpv-5gwn": Phase="Pending", Reason="", readiness=false. Elapsed: 39.058557ms
Jun 23 01:18:50.826: INFO: Pod "pod-subpath-test-preprovisionedpv-5gwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065591237s
Jun 23 01:18:52.825: INFO: Pod "pod-subpath-test-preprovisionedpv-5gwn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0646082s
Jun 23 01:18:54.826: INFO: Pod "pod-subpath-test-preprovisionedpv-5gwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06550375s
STEP: Saw pod success
Jun 23 01:18:54.826: INFO: Pod "pod-subpath-test-preprovisionedpv-5gwn" satisfied condition "Succeeded or Failed"
Jun 23 01:18:54.850: INFO: Trying to get logs from node nodes-us-west3-a-j1m9 pod pod-subpath-test-preprovisionedpv-5gwn container test-container-subpath-preprovisionedpv-5gwn: <nil>
STEP: delete the pod
Jun 23 01:18:54.913: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5gwn to disappear
Jun 23 01:18:54.937: INFO: Pod pod-subpath-test-preprovisionedpv-5gwn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5gwn
Jun 23 01:18:54.937: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5gwn" in namespace "provisioning-4223"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":13,"skipped":107,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:55.638: INFO: Only supported for providers [openstack] (not gce)
... skipping 60 lines ...
Jun 23 01:18:45.023: INFO: PersistentVolumeClaim pvc-v65w7 found but phase is Pending instead of Bound.
Jun 23 01:18:47.061: INFO: PersistentVolumeClaim pvc-v65w7 found and phase=Bound (12.196526751s)
Jun 23 01:18:47.061: INFO: Waiting up to 3m0s for PersistentVolume local-g67fb to have phase Bound
Jun 23 01:18:47.089: INFO: PersistentVolume local-g67fb found and phase=Bound (27.099504ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-f9mf
STEP: Creating a pod to test subpath
Jun 23 01:18:47.210: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-f9mf" in namespace "provisioning-8131" to be "Succeeded or Failed"
Jun 23 01:18:47.248: INFO: Pod "pod-subpath-test-preprovisionedpv-f9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.660089ms
Jun 23 01:18:49.274: INFO: Pod "pod-subpath-test-preprovisionedpv-f9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064133192s
Jun 23 01:18:51.273: INFO: Pod "pod-subpath-test-preprovisionedpv-f9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063265585s
Jun 23 01:18:53.273: INFO: Pod "pod-subpath-test-preprovisionedpv-f9mf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062994653s
Jun 23 01:18:55.273: INFO: Pod "pod-subpath-test-preprovisionedpv-f9mf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062798224s
STEP: Saw pod success
Jun 23 01:18:55.273: INFO: Pod "pod-subpath-test-preprovisionedpv-f9mf" satisfied condition "Succeeded or Failed"
Jun 23 01:18:55.297: INFO: Trying to get logs from node nodes-us-west3-a-9jqc pod pod-subpath-test-preprovisionedpv-f9mf container test-container-volume-preprovisionedpv-f9mf: <nil>
STEP: delete the pod
Jun 23 01:18:55.354: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-f9mf to disappear
Jun 23 01:18:55.376: INFO: Pod pod-subpath-test-preprovisionedpv-f9mf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-f9mf
Jun 23 01:18:55.376: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-f9mf" in namespace "provisioning-8131"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":51,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:18:56.530: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 196 lines ...
test/e2e/common/node/framework.go:23
NodeLease
test/e2e/common/node/node_lease.go:51
the kubelet should report node status infrequently
test/e2e/common/node/node_lease.go:114
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should report node status infrequently","total":-1,"completed":6,"skipped":67,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Pods
test/e2e/framework/framework.go:186
[1mSTEP[0m: Creating a kubernetes client
... skipping 19 lines ...
STEP: updating the pod
Jun 23 01:18:54.607: INFO: Successfully updated pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5"
Jun 23 01:18:54.607: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5" in namespace "pods-5575" to be "terminated with reason DeadlineExceeded"
Jun 23 01:18:54.632: INFO: Pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5": Phase="Running", Reason="", readiness=true. Elapsed: 24.600507ms
Jun 23 01:18:56.657: INFO: Pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5": Phase="Running", Reason="", readiness=true. Elapsed: 2.049882434s
Jun 23 01:18:58.657: INFO: Pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5": Phase="Running", Reason="", readiness=true. Elapsed: 4.049871535s
Jun 23 01:19:00.657: INFO: Pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.050079178s
Jun 23 01:19:00.658: INFO: Pod "pod-update-activedeadlineseconds-65ecacc3-05b5-4eeb-8b14-6075fa6b50c5" satisfied condition "terminated with reason DeadlineExceeded"
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:187
Jun 23 01:19:00.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5575" for this suite.
• [SLOW TEST:10.980 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:19:00.757: INFO: Only supported for providers [azure] (not gce)
... skipping 123 lines ...
• [SLOW TEST:12.333 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should be able to schedule after more than 100 missed schedule
test/e2e/apps/cronjob.go:191
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":21,"skipped":164,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 23 01:19:01.093: INFO: Only supported for providers [aws] (not gce)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-file]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not gce)
test/e2e/storage/drivers/in_tree.go:2079
------------------------------
... skipping 52388 lines ...
dwhjx\"\nI0623 01:25:51.186555 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7909/pod-bac29f27-0df0-4b97-b252-6daddf2d4dfd\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:51.186904 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:51.337469 11 job_controller.go:504] enqueueing job job-434/foo\nI0623 01:25:51.369614 11 pv_controller.go:890] volume \"local-pv7gnq2\" entered phase \"Available\"\nI0623 01:25:51.376569 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3035/pod-469dc704-550a-4be2-be2f-8a050a46001f\" PVC=\"persistent-local-volumes-test-3035/pvc-dwhjx\"\nI0623 01:25:51.376597 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3035/pvc-dwhjx\"\nI0623 01:25:51.379898 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3035/pod-e339862f-9171-45a8-91ce-89fc315a45da\" PVC=\"persistent-local-volumes-test-3035/pvc-dwhjx\"\nI0623 01:25:51.379922 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3035/pvc-dwhjx\"\nI0623 01:25:51.384268 11 pv_controller.go:941] claim \"persistent-local-volumes-test-2687/pvc-8pclx\" bound to volume \"local-pv7gnq2\"\nI0623 01:25:51.400721 11 pv_controller.go:890] volume \"local-pv7gnq2\" entered phase \"Bound\"\nI0623 01:25:51.401272 11 pv_controller.go:993] volume \"local-pv7gnq2\" bound to claim \"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:51.418182 11 pv_controller.go:834] claim \"persistent-local-volumes-test-2687/pvc-8pclx\" entered phase \"Bound\"\nI0623 01:25:51.836517 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-7195/pvc-6kqjq\"\nI0623 01:25:51.863021 11 pv_controller.go:651] volume \"local-6hwbz\" is released and reclaim 
policy \"Retain\" will be executed\nI0623 01:25:51.872671 11 pv_controller.go:651] volume \"local-6hwbz\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:25:51.880237 11 pv_controller.go:890] volume \"local-6hwbz\" entered phase \"Released\"\nI0623 01:25:51.887543 11 pv_controller_base.go:582] deletion of claim \"provisioning-7195/pvc-6kqjq\" was already processed\nE0623 01:25:51.890096 11 namespace_controller.go:162] deletion of namespace kubectl-5981 failed: unexpected items still remain in namespace: kubectl-5981 for gvr: /v1, Resource=pods\nI0623 01:25:52.000990 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"gc-4738/simpletest.deployment\"\nI0623 01:25:52.010193 11 garbagecollector.go:504] \"Processing object\" object=\"job-434/foo-slb5p\" objectUID=fb403a6b-4bb9-4da6-ac78-beefa425a0ab kind=\"Pod\" virtual=false\nI0623 01:25:52.010871 11 job_controller.go:504] enqueueing job job-434/foo\nE0623 01:25:52.011361 11 tracking_utils.go:109] \"deleting tracking annotation UID expectations\" err=\"couldn't create key for object job-434/foo: could not find key for obj \\\"job-434/foo\\\"\" job=\"job-434/foo\"\nI0623 01:25:52.011286 11 garbagecollector.go:504] \"Processing object\" object=\"job-434/foo-p64q8\" objectUID=9b971ece-06a9-47ab-b911-c227e5fd9b39 kind=\"Pod\" virtual=false\nI0623 01:25:52.017226 11 garbagecollector.go:616] \"Deleting object\" object=\"job-434/foo-slb5p\" objectUID=fb403a6b-4bb9-4da6-ac78-beefa425a0ab kind=\"Pod\" propagationPolicy=Background\nI0623 01:25:52.017793 11 garbagecollector.go:616] \"Deleting object\" object=\"job-434/foo-p64q8\" objectUID=9b971ece-06a9-47ab-b911-c227e5fd9b39 kind=\"Pod\" propagationPolicy=Background\nE0623 01:25:52.246637 11 pv_controller.go:1501] error finding provisioning plugin for claim volume-6226/pvc-blxxm: storageclass.storage.k8s.io \"volume-6226\" not found\nI0623 01:25:52.246932 11 event.go:294] \"Event occurred\" object=\"volume-6226/pvc-blxxm\" 
fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6226\\\" not found\"\nI0623 01:25:52.250251 11 namespace_controller.go:185] Namespace has been deleted conntrack-2188\nI0623 01:25:52.281864 11 pv_controller.go:890] volume \"local-rltjv\" entered phase \"Available\"\nW0623 01:25:52.390840 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:25:52.391331 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0623 01:25:53.165381 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:25:53.165414 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:25:53.386590 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-9466-9954/csi-mockplugin\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0623 01:25:53.742135 11 garbagecollector.go:504] \"Processing object\" object=\"webhook-2336/e2e-test-webhook-q525p\" objectUID=829d8e4d-84c2-4931-8832-2e6db2a00ac8 kind=\"EndpointSlice\" virtual=false\nI0623 01:25:53.754874 11 garbagecollector.go:616] \"Deleting object\" object=\"webhook-2336/e2e-test-webhook-q525p\" objectUID=829d8e4d-84c2-4931-8832-2e6db2a00ac8 kind=\"EndpointSlice\" propagationPolicy=Background\nI0623 
01:25:53.790803 11 garbagecollector.go:504] \"Processing object\" object=\"webhook-2336/sample-webhook-deployment-5f8b6c9658\" objectUID=6e20c435-2a15-4d42-b4cb-23cdbc85bee7 kind=\"ReplicaSet\" virtual=false\nI0623 01:25:53.791134 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"webhook-2336/sample-webhook-deployment\"\nI0623 01:25:53.796232 11 garbagecollector.go:616] \"Deleting object\" object=\"webhook-2336/sample-webhook-deployment-5f8b6c9658\" objectUID=6e20c435-2a15-4d42-b4cb-23cdbc85bee7 kind=\"ReplicaSet\" propagationPolicy=Background\nI0623 01:25:53.802778 11 garbagecollector.go:504] \"Processing object\" object=\"webhook-2336/sample-webhook-deployment-5f8b6c9658-6kchj\" objectUID=db422733-e85f-4cfd-8bab-54447db4f6ed kind=\"Pod\" virtual=false\nI0623 01:25:53.807578 11 garbagecollector.go:616] \"Deleting object\" object=\"webhook-2336/sample-webhook-deployment-5f8b6c9658-6kchj\" objectUID=db422733-e85f-4cfd-8bab-54447db4f6ed kind=\"Pod\" propagationPolicy=Background\nI0623 01:25:53.995817 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9267-2039\nW0623 01:25:54.021867 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:25:54.021900 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:25:55.172485 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7909/pod-a0954485-43dd-49ff-8a37-1ef1140f2e04\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:55.172511 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:55.771506 11 
pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7909/pod-bac29f27-0df0-4b97-b252-6daddf2d4dfd\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:55.771532 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:55.780395 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7909/pod-a0954485-43dd-49ff-8a37-1ef1140f2e04\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:55.780423 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7909/pvc-p9v6d\"\nI0623 01:25:56.282035 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2687/pod-960cbb4b-a13b-4829-ac55-ac5b75f869bb\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:56.282061 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:56.325447 11 namespace_controller.go:185] Namespace has been deleted statefulset-1255\nI0623 01:25:56.950877 11 namespace_controller.go:185] Namespace has been deleted csistoragecapacity-3016\nI0623 01:25:57.569887 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"webhook-7952/sample-webhook-deployment-5f8b6c9658\" need=1 creating=1\nI0623 01:25:57.574869 11 event.go:294] \"Event occurred\" object=\"webhook-7952/sample-webhook-deployment\" fieldPath=\"\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-5f8b6c9658 to 1\"\nI0623 01:25:57.594424 11 event.go:294] \"Event occurred\" object=\"webhook-7952/sample-webhook-deployment-5f8b6c9658\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
sample-webhook-deployment-5f8b6c9658-zmc8n\"\nI0623 01:25:57.605239 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"webhook-7952/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:25:57.612244 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2687/pod-960cbb4b-a13b-4829-ac55-ac5b75f869bb\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:57.614781 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:57.622732 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2687/pod-960cbb4b-a13b-4829-ac55-ac5b75f869bb\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:57.623142 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:57.653494 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"webhook-7952/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:25:57.655015 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"persistent-local-volumes-test-2687/pvc-8pclx\"\nI0623 01:25:57.678239 11 pv_controller.go:651] volume \"local-pv7gnq2\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:25:57.688980 11 pv_controller.go:890] volume \"local-pv7gnq2\" entered phase \"Released\"\nI0623 01:25:57.711918 11 pv_controller_base.go:582] deletion of claim \"persistent-local-volumes-test-2687/pvc-8pclx\" was already processed\nI0623 01:25:58.437331 11 reconciler.go:250] 
"attacherDetacher.DetachVolume started" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-mock-csi-mock-volumes-8870^5cf77542-f293-11ec-b89f-fe1ba80b49ec VolumeSpec:0xc0025cdb18 NodeName:nodes-us-west3-a-j1m9 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:25:50.538178285 +0000 UTC m=+948.484880806}
I0623 01:25:58.443041 11 operation_generator.go:1603] Verified volume is safe to detach for volume "pvc-73097fc8-a61c-4985-8037-2ae6f78779ce" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8870^5cf77542-f293-11ec-b89f-fe1ba80b49ec") on node "nodes-us-west3-a-j1m9"
I0623 01:25:58.625615 11 namespace_controller.go:185] Namespace has been deleted projected-4933
I0623 01:25:58.812451 11 garbagecollector.go:504] "Processing object" object="provisioning-6110-4182/csi-hostpathplugin-0" objectUID=3921edbc-1c6d-44e5-a5e6-9d856c0d10c9 kind="Pod" virtual=false
I0623 01:25:58.812759 11 stateful_set.go:450] StatefulSet has been deleted provisioning-6110-4182/csi-hostpathplugin
I0623 01:25:58.812826 11 garbagecollector.go:504] "Processing object" object="provisioning-6110-4182/csi-hostpathplugin-5888567586" objectUID=e2762b84-a92d-4084-a261-f9b9d3cc15f5 kind="ControllerRevision" virtual=false
I0623 01:25:58.817278 11 garbagecollector.go:616] "Deleting object" object="provisioning-6110-4182/csi-hostpathplugin-5888567586" objectUID=e2762b84-a92d-4084-a261-f9b9d3cc15f5 kind="ControllerRevision" propagationPolicy=Background
I0623 01:25:58.817737 11 garbagecollector.go:616] "Deleting object" object="provisioning-6110-4182/csi-hostpathplugin-0" objectUID=3921edbc-1c6d-44e5-a5e6-9d856c0d10c9 kind="Pod" propagationPolicy=Background
I0623 01:25:58.923105 11 namespace_controller.go:185] Namespace has been deleted apparmor-1296
I0623 01:25:58.975444 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-73097fc8-a61c-4985-8037-2ae6f78779ce" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8870^5cf77542-f293-11ec-b89f-fe1ba80b49ec") on node "nodes-us-west3-a-j1m9"
I0623 01:25:59.024386 11 pvc_protection_controller.go:269] "PVC is unused" PVC="csi-mock-volumes-2402/pvc-vccsq"
I0623 01:25:59.038303 11 pv_controller.go:651] volume "pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f" is released and reclaim policy "Delete" will be executed
I0623 01:25:59.045665 11 pv_controller.go:890] volume "pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f" entered phase "Released"
I0623 01:25:59.050436 11 pv_controller.go:1353] isVolumeReleased[pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f]: volume is released
I0623 01:25:59.175764 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-7909/pod-a0954485-43dd-49ff-8a37-1ef1140f2e04" PVC="persistent-local-volumes-test-7909/pvc-p9v6d"
I0623 01:25:59.175792 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-7909/pvc-p9v6d"
I0623 01:25:59.226589 11 replica_set.go:577] "Too few replicas" replicaSet="deployment-6777/test-orphan-deployment-68c48f9ff9" need=1 creating=1
I0623 01:25:59.227120 11 event.go:294] "Event occurred" object="deployment-6777/test-orphan-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-orphan-deployment-68c48f9ff9 to 1"
I0623 01:25:59.240146 11 deployment_controller.go:497] "Error syncing deployment" deployment="deployment-6777/test-orphan-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-orphan-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0623 01:25:59.244576 11 event.go:294] "Event occurred" object="deployment-6777/test-orphan-deployment-68c48f9ff9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-orphan-deployment-68c48f9ff9-67gjn"
I0623 01:25:59.370793 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-7909/pod-a0954485-43dd-49ff-8a37-1ef1140f2e04" PVC="persistent-local-volumes-test-7909/pvc-p9v6d"
I0623 01:25:59.372125 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-7909/pvc-p9v6d"
I0623 01:25:59.378154 11 pvc_protection_controller.go:269] "PVC is unused" PVC="persistent-local-volumes-test-7909/pvc-p9v6d"
I0623 01:25:59.392920 11 pv_controller.go:651] volume "local-pv7qg4p" is released and reclaim policy "Retain" will be executed
I0623 01:25:59.400516 11 pv_controller.go:890] volume "local-pv7qg4p" entered phase "Released"
I0623 01:25:59.410806 11 pv_controller_base.go:582] deletion of claim "persistent-local-volumes-test-7909/pvc-p9v6d" was already processed
E0623 01:25:59.688171 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
E0623 01:25:59.865297 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:25:59.902364 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3499
E0623 01:26:00.032742 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:00.075779 11 stateful_set.go:450] StatefulSet has been deleted provisioning-356-2061/csi-hostpathplugin
I0623 01:26:00.075933 11 garbagecollector.go:504] "Processing object" object="provisioning-356-2061/csi-hostpathplugin-0" objectUID=0f52f9ce-81f5-4fef-b738-8f2a52d23c01 kind="Pod" virtual=false
I0623 01:26:00.076374 11 garbagecollector.go:504] "Processing object" object="provisioning-356-2061/csi-hostpathplugin-759b47dc6b" objectUID=b6dabd90-df24-421a-82c2-48f01a1dfa27 kind="ControllerRevision" virtual=false
I0623 01:26:00.080156 11 garbagecollector.go:616] "Deleting object" object="provisioning-356-2061/csi-hostpathplugin-759b47dc6b" objectUID=b6dabd90-df24-421a-82c2-48f01a1dfa27 kind="ControllerRevision" propagationPolicy=Background
I0623 01:26:00.080244 11 garbagecollector.go:616] "Deleting object" object="provisioning-356-2061/csi-hostpathplugin-0" objectUID=0f52f9ce-81f5-4fef-b738-8f2a52d23c01 kind="Pod" propagationPolicy=Background
I0623 01:26:00.183770 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-3035/pod-e339862f-9171-45a8-91ce-89fc315a45da" PVC="persistent-local-volumes-test-3035/pvc-dwhjx"
I0623 01:26:00.183800 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3035/pvc-dwhjx"
E0623 01:26:00.252255 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:00.373268 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-3035/pod-e339862f-9171-45a8-91ce-89fc315a45da" PVC="persistent-local-volumes-test-3035/pvc-dwhjx"
I0623 01:26:00.374152 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3035/pvc-dwhjx"
I0623 01:26:00.385724 11 pvc_protection_controller.go:269] "PVC is unused" PVC="persistent-local-volumes-test-3035/pvc-dwhjx"
I0623 01:26:00.399258 11 pv_controller.go:651] volume "local-pvwnmxp" is released and reclaim policy "Retain" will be executed
I0623 01:26:00.409782 11 pv_controller.go:890] volume "local-pvwnmxp" entered phase "Released"
I0623 01:26:00.423119 11 pv_controller_base.go:582] deletion of claim "persistent-local-volumes-test-3035/pvc-dwhjx" was already processed
E0623 01:26:00.484986 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
E0623 01:26:00.804549 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:01.033940 11 namespace_controller.go:185] Namespace has been deleted runtimeclass-5996
E0623 01:26:01.134857 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:01.391042 11 namespace_controller.go:185] Namespace has been deleted volumemode-6662
I0623 01:26:01.475140 11 pv_controller.go:941] claim "volume-6226/pvc-blxxm" bound to volume "local-rltjv"
I0623 01:26:01.480419 11 pv_controller.go:1353] isVolumeReleased[pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f]: volume is released
I0623 01:26:01.482950 11 pv_controller.go:1353] isVolumeReleased[pvc-73097fc8-a61c-4985-8037-2ae6f78779ce]: volume is released
I0623 01:26:01.495224 11 pv_controller.go:890] volume "local-rltjv" entered phase "Bound"
I0623 01:26:01.495531 11 pv_controller.go:993] volume "local-rltjv" bound to claim "volume-6226/pvc-blxxm"
I0623 01:26:01.511600 11 pv_controller.go:834] claim "volume-6226/pvc-blxxm" entered phase "Bound"
I0623 01:26:01.512191 11 pv_controller.go:941] claim "provisioning-5364/pvc-xzmvr" bound to volume "local-hhqgr"
I0623 01:26:01.526917 11 pv_controller.go:890] volume "local-hhqgr" entered phase "Bound"
I0623 01:26:01.526964 11 pv_controller.go:993] volume "local-hhqgr" bound to claim "provisioning-5364/pvc-xzmvr"
I0623 01:26:01.544228 11 pv_controller.go:834] claim "provisioning-5364/pvc-xzmvr" entered phase "Bound"
I0623 01:26:01.556100 11 namespace_controller.go:185] Namespace has been deleted provisioning-1212
I0623 01:26:01.602748 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-6850/inline-volume-tester-p6vrd" PVC="ephemeral-6850/inline-volume-tester-p6vrd-my-volume-0"
I0623 01:26:01.602778 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-6850/inline-volume-tester-p6vrd-my-volume-0"
I0623 01:26:01.624083 11 pvc_protection_controller.go:269] "PVC is unused" PVC="ephemeral-6850/inline-volume-tester-p6vrd-my-volume-0"
I0623 01:26:01.636049 11 garbagecollector.go:504] "Processing object" object="ephemeral-6850/inline-volume-tester-p6vrd" objectUID=22d65573-f99c-497e-9a68-18055d49f75f kind="Pod" virtual=false
I0623 01:26:01.641256 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6850, name: inline-volume-tester-p6vrd, uid: 22d65573-f99c-497e-9a68-18055d49f75f]
I0623 01:26:01.641622 11 pv_controller.go:651] volume "pvc-f4f83ac2-c7ca-42c5-9dff-1ca056661f49" is released and reclaim policy "Delete" will be executed
I0623 01:26:01.648426 11 pv_controller.go:890] volume "pvc-f4f83ac2-c7ca-42c5-9dff-1ca056661f49" entered phase "Released"
I0623 01:26:01.673739 11 pv_controller_base.go:582] deletion of claim "ephemeral-6850/inline-volume-tester-p6vrd-my-volume-0" was already processed
E0623 01:26:01.703451 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:01.982845 11 reconciler.go:250] "attacherDetacher.DetachVolume started" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-mock-csi-mock-volumes-2402^3891063d-f293-11ec-8835-aaaf86b0c220 VolumeSpec:0xc0027eda10 NodeName:nodes-us-west3-a-s284 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:25:57.412788477 +0000 UTC m=+955.359490996}
I0623 01:26:01.987131 11 operation_generator.go:1603] Verified volume is safe to detach for volume "pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2402^3891063d-f293-11ec-8835-aaaf86b0c220") on node "nodes-us-west3-a-s284"
E0623 01:26:02.073214 11 pv_protection_controller.go:114] PV pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f": the object has been modified; please apply your changes to the latest version and try again
I0623 01:26:02.096812 11 pv_controller_base.go:582] deletion of claim "csi-mock-volumes-2402/pvc-vccsq" was already processed
E0623 01:26:02.104696 11 pv_protection_controller.go:114] PV pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1f7e1f68-70dc-4252-93df-35bba4a8d1b8, UID in object meta:
I0623 01:26:02.154430 11 namespace_controller.go:185] Namespace has been deleted ephemeral-7373-8047
W0623 01:26:02.254104 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:26:02.254278 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:26:02.292958 11 namespace_controller.go:185] Namespace has been deleted provisioning-6110
E0623 01:26:02.501967 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:02.503765 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-c569742e-3e3c-4465-a2c5-6547dc25b34f" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2402^3891063d-f293-11ec-8835-aaaf86b0c220") on node "nodes-us-west3-a-s284"
I0623 01:26:02.763133 11 replica_set.go:577] "Too few replicas" replicaSet="replication-controller-650/pod-release" need=1 creating=1
I0623 01:26:02.763836 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5767-969
I0623 01:26:02.781281 11 event.go:294] "Event occurred" object="replication-controller-650/pod-release" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: pod-release-ql2bv"
I0623 01:26:02.820339 11 namespace_controller.go:185] Namespace has been deleted provisioning-7195
I0623 01:26:02.866878 11 controller_ref_manager.go:239] patching pod replication-controller-650_pod-release-ql2bv to remove its controllerRef to v1/ReplicationController:pod-release
I0623 01:26:02.879241 11 garbagecollector.go:504] "Processing object" object="replication-controller-650/pod-release" objectUID=69061dc7-25f1-4588-b062-93e5b6c37e87 kind="ReplicationController" virtual=false
I0623 01:26:02.879658 11 replica_set.go:577] "Too few replicas" replicaSet="replication-controller-650/pod-release" need=1 creating=1
I0623 01:26:02.890429 11 event.go:294] "Event occurred" object="replication-controller-650/pod-release" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: pod-release-g7cn9"
I0623 01:26:02.927255 11 namespace_controller.go:185] Namespace has been deleted svcaccounts-6865
I0623 01:26:02.943868 11 garbagecollector.go:543] object [v1/ReplicationController, namespace: replication-controller-650, name: pod-release, uid: 69061dc7-25f1-4588-b062-93e5b6c37e87]'s doesn't have an owner, continue on next item
I0623 01:26:03.237941 11 namespace_controller.go:185] Namespace has been deleted server-version-5619
E0623 01:26:03.247888 11 pv_controller.go:1501] error finding provisioning plugin for claim ephemeral-61/inline-volume-d2ldn-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0623 01:26:03.248333 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-d2ldn-my-volume" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
E0623 01:26:03.290717 11 namespace_controller.go:162] deletion of namespace webhook-6842 failed: unexpected items still remain in namespace: webhook-6842 for gvr: /v1, Resource=pods
I0623 01:26:03.327366 11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-61, name: inline-volume-d2ldn, uid: 6a16e596-f5c2-4642-8f5e-8642d009766d] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0623 01:26:03.328499 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-d2ldn-my-volume" objectUID=88be3f3d-b542-413f-9884-b54efc2ab3b5 kind="PersistentVolumeClaim" virtual=false
I0623 01:26:03.328738 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-d2ldn" objectUID=6a16e596-f5c2-4642-8f5e-8642d009766d kind="Pod" virtual=false
I0623 01:26:03.337011 11 garbagecollector.go:631] adding [v1/PersistentVolumeClaim, namespace: ephemeral-61, name: inline-volume-d2ldn-my-volume, uid: 88be3f3d-b542-413f-9884-b54efc2ab3b5] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-61, name: inline-volume-d2ldn, uid: 6a16e596-f5c2-4642-8f5e-8642d009766d] is deletingDependents
I0623 01:26:03.344421 11 garbagecollector.go:616] "Deleting object" object="ephemeral-61/inline-volume-d2ldn-my-volume" objectUID=88be3f3d-b542-413f-9884-b54efc2ab3b5 kind="PersistentVolumeClaim" propagationPolicy=Background
E0623 01:26:03.354392 11 pv_controller.go:1501] error finding provisioning plugin for claim ephemeral-61/inline-volume-d2ldn-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0623 01:26:03.355214 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-d2ldn-my-volume" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0623 01:26:03.356077 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-d2ldn-my-volume" objectUID=88be3f3d-b542-413f-9884-b54efc2ab3b5 kind="PersistentVolumeClaim" virtual=false
I0623 01:26:03.361977 11 pvc_protection_controller.go:269] "PVC is unused" PVC="ephemeral-61/inline-volume-d2ldn-my-volume"
I0623 01:26:03.372790 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-d2ldn" objectUID=6a16e596-f5c2-4642-8f5e-8642d009766d kind="Pod" virtual=false
I0623 01:26:03.380364 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-61, name: inline-volume-d2ldn, uid: 6a16e596-f5c2-4642-8f5e-8642d009766d]
I0623 01:26:03.384296 11 namespace_controller.go:185] Namespace has been deleted provisioning-356
E0623 01:26:03.401662 11 namespace_controller.go:162] deletion of namespace kubelet-test-280 failed: unexpected items still remain in namespace: kubelet-test-280 for gvr: /v1, Resource=pods
I0623 01:26:03.507514 11 garbagecollector.go:504] "Processing object" object="webhook-7952/e2e-test-webhook-htb4j" objectUID=89e6d4d7-3122-47ae-b455-30b13a2d9a44 kind="EndpointSlice" virtual=false
I0623 01:26:03.521114 11 garbagecollector.go:616] "Deleting object" object="webhook-7952/e2e-test-webhook-htb4j" objectUID=89e6d4d7-3122-47ae-b455-30b13a2d9a44 kind="EndpointSlice" propagationPolicy=Background
I0623 01:26:03.544182 11 deployment_controller.go:590] "Deployment has been deleted" deployment="webhook-7952/sample-webhook-deployment"
I0623 01:26:03.544730 11 garbagecollector.go:504] "Processing object" object="webhook-7952/sample-webhook-deployment-5f8b6c9658" objectUID=5c9fcbfd-4f28-4405-8c3b-41535984a158 kind="ReplicaSet" virtual=false
I0623 01:26:03.547398 11 garbagecollector.go:616] "Deleting object" object="webhook-7952/sample-webhook-deployment-5f8b6c9658" objectUID=5c9fcbfd-4f28-4405-8c3b-41535984a158 kind="ReplicaSet" propagationPolicy=Background
I0623 01:26:03.553263 11 garbagecollector.go:504] "Processing object" object="webhook-7952/sample-webhook-deployment-5f8b6c9658-zmc8n" objectUID=fc8c9a17-530d-4a9e-9348-fb6ba35386d6 kind="Pod" virtual=false
I0623 01:26:03.558474 11 garbagecollector.go:616] "Deleting object" object="webhook-7952/sample-webhook-deployment-5f8b6c9658-zmc8n" objectUID=fc8c9a17-530d-4a9e-9348-fb6ba35386d6 kind="Pod" propagationPolicy=Background
I0623 01:26:04.004128 11 namespace_controller.go:185] Namespace has been deleted webhook-2336-markers
I0623 01:26:04.010097 11 namespace_controller.go:185] Namespace has been deleted webhook-2336
E0623 01:26:04.100338 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:04.298636 11 namespace_controller.go:185] Namespace has been deleted volume-7905
I0623 01:26:05.238836 11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success"
I0623 01:26:05.240009 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0623 01:26:05.246240 11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0623 01:26:05.263315 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"pd.csi.storage.gke.io\" or manually created by system administrator"
I0623 01:26:05.263587 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"pd.csi.storage.gke.io\" or manually created by system administrator"
I0623 01:26:06.040160 11 pv_controller_base.go:582] deletion of claim "csi-mock-volumes-8870/pvc-m7jg8" was already processed
I0623 01:26:06.577143 11 event.go:294] "Event occurred" object="ephemeral-61-788/csi-hostpathplugin" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
W0623 01:26:06.625992 11 utils.go:264] Service services-9875/service-headless-toggled using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
I0623 01:26:06.693320 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForPodScheduled" message="waiting for pod inline-volume-tester-vvd5r to be scheduled"
I0623 01:26:06.854722 11 pvc_protection_controller.go:269] "PVC is unused" PVC="volume-6226/pvc-blxxm"
I0623 01:26:06.865203 11 pv_controller.go:651] volume "local-rltjv" is released and reclaim policy "Retain" will be executed
I0623 01:26:06.872178 11 pv_controller.go:890] volume "local-rltjv" entered phase "Released"
I0623 01:26:06.893951 11 pv_controller_base.go:582] deletion of claim "volume-6226/pvc-blxxm" was already processed
E0623 01:26:06.933988 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:26:06.966895 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2687
I0623 01:26:07.516589 11 resource_quota_controller.go:312] Resource quota has been deleted resourcequota-4397/test-quota
I0623 01:26:08.042760 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7909
I0623 01:26:08.091264 11 garbagecollector.go:504] "Processing object" object="replication-controller-650/pod-release-g7cn9" objectUID=e7f208f9-b257-4c64-b7cf-f7d31c1597e4 kind="Pod" virtual=false
I0623 01:26:08.096655 11 garbagecollector.go:616] "Deleting object" object="replication-controller-650/pod-release-g7cn9" objectUID=e7f208f9-b257-4c64-b7cf-f7d31c1597e4 kind="Pod" propagationPolicy=Background
I0623 01:26:08.289022 11 reconciler.go:250] "attacherDetacher.DetachVolume started" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-6850^54eca6e9-f293-11ec-b933-5a9b878f29da VolumeSpec:0xc0024e3458 NodeName:nodes-us-west3-a-l43j PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:26:01.661335103 +0000 UTC m=+959.608037623}
I0623 01:26:08.306050 11 operation_generator.go:1603] Verified volume is safe to detach for volume "pvc-f4f83ac2-c7ca-42c5-9dff-1ca056661f49" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6850^54eca6e9-f293-11ec-b933-5a9b878f29da") on node "nodes-us-west3-a-l43j"
I0623 01:26:08.451198 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-61\" or manually created by system administrator"
I0623 01:26:08.828111 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-f4f83ac2-c7ca-42c5-9dff-1ca056661f49" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6850^54eca6e9-f293-11ec-b933-5a9b878f29da") on node "nodes-us-west3-a-l43j"
I0623 01:26:09.426323 11 namespace_controller.go:185] Namespace has been deleted e2e-privileged-pod-9171
I0623 01:26:09.455887 11 namespace_controller.go:185] Namespace has been deleted provisioning-6110-4182
I0623 01:26:09.616196 11 pv_controller.go:890] volume "pvc-73740996-47d1-45b1-ba5f-42acd1d75114" entered phase "Bound"
I0623 01:26:09.616236 11 pv_controller.go:993] volume "pvc-73740996-47d1-45b1-ba5f-42acd1d75114" bound to claim "statefulset-9707/datadir-ss-0"
I0623 01:26:09.630426 11 pv_controller.go:834] claim "statefulset-9707/datadir-ss-0" entered phase "Bound"
I0623 01:26:09.874528 11 event.go:294] "Event occurred" object="csi-mock-volumes-9466/pvc-4vjm4" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9466\" or manually created by system administrator"
I0623 01:26:09.890893 11 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-5364/pvc-xzmvr"
I0623 01:26:09.900809 11 pv_controller.go:651] volume "local-hhqgr" is released and reclaim policy "Retain" will be executed
I0623 01:26:09.905902 11 pv_controller.go:890] volume "local-hhqgr" entered phase "Released"
I0623 01:26:09.922470 11 pv_controller.go:890] volume "pvc-980d9565-a82c-42cc-a74e-757882226896" entered phase "Bound"
I0623 01:26:09.923311 11 pv_controller.go:993] volume "pvc-980d9565-a82c-42cc-a74e-757882226896" bound to claim "csi-mock-volumes-9466/pvc-4vjm4"
I0623 01:26:09.942590 11 pv_controller.go:834] claim "csi-mock-volumes-9466/pvc-4vjm4" entered phase "Bound"
I0623 01:26:09.943596 11 pv_controller_base.go:582] deletion of claim "provisioning-5364/pvc-xzmvr" was already processed
I0623 01:26:09.990800 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-2402-1639/csi-mockplugin-74644c5497" objectUID=a14fb366-8aa1-4e32-bfaa-e7b8a3da6379 kind="ControllerRevision" virtual=false
I0623 01:26:09.991640 11 stateful_set.go:450] StatefulSet has been deleted csi-mock-volumes-2402-1639/csi-mockplugin
I0623 01:26:09.992166 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-2402-1639/csi-mockplugin-0" objectUID=fd3fe9f1-ecca-4a3d-b7bd-27aa5ecf9163 kind="Pod" virtual=false
I0623 01:26:10.002048 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-2402-1639/csi-mockplugin-74644c5497" objectUID=a14fb366-8aa1-4e32-bfaa-e7b8a3da6379 kind="ControllerRevision" propagationPolicy=Background
I0623 01:26:10.002048 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-2402-1639/csi-mockplugin-0" objectUID=fd3fe9f1-ecca-4a3d-b7bd-27aa5ecf9163 kind="Pod" propagationPolicy=Background
I0623 01:26:10.027266 11 garbagecollector.go:504] "Processing object" object="ephemeral-6850-2938/csi-hostpathplugin-54878b5d7f" objectUID=e5131fac-727b-4374-bd7a-18ac13df7ce6 kind="ControllerRevision" virtual=false
I0623 01:26:10.027762 11 stateful_set.go:450] StatefulSet has been deleted ephemeral-6850-2938/csi-hostpathplugin
I0623 01:26:10.027803 11 garbagecollector.go:504] "Processing object" object="ephemeral-6850-2938/csi-hostpathplugin-0" objectUID=80a46074-386b-4722-92e1-c4e6fcfbc801 kind="Pod" virtual=false
I0623 01:26:10.030915 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-2402-1639/csi-mockplugin-attacher-6b8964cb7d" objectUID=99780e55-ea54-4e1b-b4cf-2993ec38c745 kind="ControllerRevision" virtual=false
I0623 01:26:10.032036 11 stateful_set.go:450] StatefulSet has been deleted csi-mock-volumes-2402-1639/csi-mockplugin-attacher
I0623 01:26:10.032110 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-2402-1639/csi-mockplugin-attacher-0" objectUID=eefb0e2e-66ed-43bf-a50f-3b8f28610742 kind="Pod" virtual=false
I0623 01:26:10.034383 11 garbagecollector.go:616] "Deleting object" object="ephemeral-6850-2938/csi-hostpathplugin-54878b5d7f" objectUID=e5131fac-727b-4374-bd7a-18ac13df7ce6 kind="ControllerRevision" propagationPolicy=Background
I0623 01:26:10.046222 11 garbagecollector.go:616] "Deleting object" object="ephemeral-6850-2938/csi-hostpathplugin-0" objectUID=80a46074-386b-4722-92e1-c4e6fcfbc801 kind="Pod" propagationPolicy=Background
I0623 01:26:10.046400 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-2402-1639/csi-mockplugin-attacher-6b8964cb7d" objectUID=99780e55-ea54-4e1b-b4cf-2993ec38c745 kind="ControllerRevision" propagationPolicy=Background
I0623 01:26:10.050284 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-2402-1639/csi-mockplugin-attacher-0" objectUID=eefb0e2e-66ed-43bf-a50f-3b8f28610742 kind="Pod" propagationPolicy=Background
I0623 01:26:10.074762 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-2402-1639/csi-mockplugin-resizer-6f6974748" objectUID=ce6ac034-9011-41d2-97a1-e5b64e193ff6 kind="ControllerRevision" virtual=false
I0623 01:26:10.075138 11 stateful_set.go:450] StatefulSet has been deleted csi-mock-volumes-2402-1639/csi-mockplugin-resizer
I0623 01:26:10.075181 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-2402-1639/csi-mockplugin-resizer-0" objectUID=346ccb30-be90-48e1-82c0-085a26a8219d kind="Pod" virtual=false
I0623 01:26:10.082793 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-2402-1639/csi-mockplugin-resizer-6f6974748" objectUID=ce6ac034-9011-41d2-97a1-e5b64e193ff6 kind="ControllerRevision" propagationPolicy=Background
I0623 01:26:10.083261 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-2402-1639/csi-mockplugin-resizer-0" objectUID=346ccb30-be90-48e1-82c0-085a26a8219d kind="Pod" propagationPolicy=Background
I0623 01:26:10.303347 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-73740996-47d1-45b1-ba5f-42acd1d75114 VolumeSpec:0xc000372bb8 NodeName:nodes-us-west3-a-9jqc ScheduledPods:[&Pod{ObjectMeta:{ss-0 ss- statefulset-9707 2388bf01-15f4-4c74-b20c-ac13d3e6dcb5 31359 0 2022-06-23 01:26:05 +0000 UTC <nil> <nil> map[baz:blah controller-revision-hash:ss-5b74f7c5d foo:bar statefulset.kubernetes.io/pod-name:ss-0] map[] [{apps/v1 StatefulSet ss 443a664f-ea73-4296-b08a-6209ac80fa30 0xc003477037 0xc003477038}] [] [{kube-controller-manager Update v1 2022-06-23 01:26:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:baz":{},"f:controller-revision-hash":{},"f:foo":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"443a664f-ea73-4296-b08a-6209ac80fa30\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"webserver\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/data/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/home\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:subdomain":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"datadir\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}},"k:{\"name\":\"home\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:datadir,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:datadir-ss-0,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:home,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/tmp/home,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-85hjn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&Downw
ardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:webserver,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:datadir,ReadOnly:false,MountPath:/data/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:home,ReadOnly:false,MountPath:/home,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-85hjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[test -f 
/data/statefulset-continue],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-9jqc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:ss-0,Subdomain:test,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nI0623 01:26:10.430779 11 namespace_controller.go:185] Namespace has been deleted provisioning-356-2061\nI0623 01:26:11.984296 11 pv_controller.go:890] volume \"pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c\" entered phase \"Bound\"\nI0623 01:26:11.984342 11 pv_controller.go:993] volume \"pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c\" bound to claim \"ephemeral-61/inline-volume-tester-vvd5r-my-volume-0\"\nI0623 01:26:12.002088 11 pv_controller.go:834] claim \"ephemeral-61/inline-volume-tester-vvd5r-my-volume-0\" entered phase \"Bound\"\nE0623 01:26:12.269014 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods\nW0623 01:26:12.337338 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:12.337370 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:12.516402 11 reconciler.go:325] \"attacherDetacher.AttachVolume started\" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-61^77714bba-f293-11ec-bbcb-a2260c93e9d3 VolumeSpec:0xc003022c90 NodeName:nodes-us-west3-a-s284 ScheduledPods:[&Pod{ObjectMeta:{inline-volume-tester-vvd5r inline-volume-tester- ephemeral-61 f31b1a60-bbca-4dd6-bf53-700f10b13135 31408 0 2022-06-23 01:26:06 +0000 UTC <nil> <nil> map[app:inline-volume-tester] map[] [] [] [{e2e.test Update v1 2022-06-23 01:26:06 +0000 UTC 
FieldsV1 {\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:app\":{}}},\"f:spec\":{\"f:affinity\":{\".\":{},\"f:nodeAffinity\":{\".\":{},\"f:requiredDuringSchedulingIgnoredDuringExecution\":{}}},\"f:containers\":{\"k:{\\\"name\\\":\\\"csi-volume-tester\\\"}\":{\".\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/mnt/test-0\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}}}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"my-volume-0\\\"}\":{\".\":{},\"f:ephemeral\":{\".\":{},\"f:volumeClaimTemplate\":{\".\":{},\"f:metadata\":{\".\":{},\"f:creationTimestamp\":{}},\"f:spec\":{\".\":{},\"f:accessModes\":{},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}},\"f:name\":{}}}}} } {kube-scheduler Update v1 2022-06-23 01:26:06 +0000 UTC FieldsV1 {\"f:status\":{\"f:conditions\":{\".\":{},\"k:{\\\"type\\\":\\\"PodScheduled\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:my-volume-0,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:&EphemeralVolumeSource{VolumeClaimTemplate:&PersistentVolumeClaimTemplate{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> 
map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1048576 0} {<nil>} 1Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*ephemeral-61vjtjp,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},},},},},Volume{Name:kube-api-access-drtfh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:csi-volume-tester,Image:registry.k8s.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh -c sleep 
10000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:my-volume-0,ReadOnly:false,MountPath:/mnt/test-0,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-drtfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s284,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-s284],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/
unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nI0623 01:26:12.764498 11 namespace_controller.go:185] Namespace has been deleted emptydir-5753\nI0623 01:26:13.084073 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume \"pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-61^77714bba-f293-11ec-bbcb-a2260c93e9d3\") from node \"nodes-us-west3-a-s284\" \nI0623 01:26:13.084458 11 event.go:294] \"Event occurred\" object=\"ephemeral-61/inline-volume-tester-vvd5r\" fieldPath=\"\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c\\\" \"\nI0623 01:26:13.559314 11 namespace_controller.go:185] Namespace has been deleted replication-controller-650\nI0623 01:26:13.584177 11 namespace_controller.go:185] Namespace has been deleted dns-8904\nI0623 01:26:13.642452 11 namespace_controller.go:185] Namespace has been deleted ephemeral-6850\nI0623 01:26:13.788724 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2402\nW0623 
01:26:13.865623 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:13.865662 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:13.935434 11 namespace_controller.go:185] Namespace has been deleted webhook-7952-markers\nI0623 01:26:14.002959 11 namespace_controller.go:185] Namespace has been deleted webhook-7952\nI0623 01:26:14.084113 11 stateful_set.go:450] StatefulSet has been deleted csi-mock-volumes-8870-5968/csi-mockplugin\nI0623 01:26:14.084171 11 garbagecollector.go:504] \"Processing object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-74f467ffd8\" objectUID=231f28a0-d82c-425a-aa2f-a655ecd4a8e8 kind=\"ControllerRevision\" virtual=false\nI0623 01:26:14.084192 11 garbagecollector.go:504] \"Processing object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-0\" objectUID=0ff40395-5214-4227-be33-c0d307da107e kind=\"Pod\" virtual=false\nI0623 01:26:14.088588 11 garbagecollector.go:616] \"Deleting object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-74f467ffd8\" objectUID=231f28a0-d82c-425a-aa2f-a655ecd4a8e8 kind=\"ControllerRevision\" propagationPolicy=Background\nI0623 01:26:14.088587 11 garbagecollector.go:616] \"Deleting object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-0\" objectUID=0ff40395-5214-4227-be33-c0d307da107e kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:14.137551 11 garbagecollector.go:504] \"Processing object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-attacher-9dbc665bf\" objectUID=3be1e275-beb4-4dac-b492-55981c1f45c3 kind=\"ControllerRevision\" virtual=false\nI0623 01:26:14.137836 11 stateful_set.go:450] StatefulSet has been deleted 
csi-mock-volumes-8870-5968/csi-mockplugin-attacher\nI0623 01:26:14.137881 11 garbagecollector.go:504] \"Processing object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-attacher-0\" objectUID=35e574c6-8e35-4c26-a284-ef4fe46af724 kind=\"Pod\" virtual=false\nI0623 01:26:14.141117 11 garbagecollector.go:616] \"Deleting object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-attacher-0\" objectUID=35e574c6-8e35-4c26-a284-ef4fe46af724 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:14.141117 11 garbagecollector.go:616] \"Deleting object\" object=\"csi-mock-volumes-8870-5968/csi-mockplugin-attacher-9dbc665bf\" objectUID=3be1e275-beb4-4dac-b492-55981c1f45c3 kind=\"ControllerRevision\" propagationPolicy=Background\nI0623 01:26:15.380912 11 garbagecollector.go:504] \"Processing object\" object=\"deployment-6777/test-orphan-deployment\" objectUID=14e96b0c-630c-40e2-a383-7d3aaf24ae30 kind=\"Deployment\" virtual=false\nI0623 01:26:15.429840 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"deployment-6777/test-orphan-deployment\"\nW0623 01:26:15.838864 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:15.838901 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0623 01:26:16.527252 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:16.527495 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0623 01:26:16.613138 11 
reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:16.613174 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:17.223313 11 namespace_controller.go:185] Namespace has been deleted cronjob-5346\nI0623 01:26:17.439731 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"deployment-6777/test-adopt-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-adopt-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:26:17.472955 11 namespace_controller.go:185] Namespace has been deleted volume-6226\nI0623 01:26:17.644368 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8870\nI0623 01:26:17.852383 11 namespace_controller.go:185] Namespace has been deleted projected-4995\nI0623 01:26:17.940050 11 namespace_controller.go:185] Namespace has been deleted resourcequota-4397\nE0623 01:26:18.139657 11 pv_controller.go:1501] error finding provisioning plugin for claim volume-9126/pvc-s7npm: storageclass.storage.k8s.io \"volume-9126\" not found\nI0623 01:26:18.140216 11 event.go:294] \"Event occurred\" object=\"volume-9126/pvc-s7npm\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-9126\\\" not found\"\nI0623 01:26:18.168210 11 pv_controller.go:890] volume \"local-dvvdg\" entered phase \"Available\"\nI0623 01:26:18.528444 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-8476/inline-volume-tester-5wzwg\" PVC=\"ephemeral-8476/inline-volume-tester-5wzwg-my-volume-0\"\nI0623 01:26:18.528490 11 
pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-8476/inline-volume-tester-5wzwg-my-volume-0\"\nI0623 01:26:18.541631 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-8476/inline-volume-tester-5wzwg-my-volume-0\"\nI0623 01:26:18.550203 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-8476/inline-volume-tester-5wzwg\" objectUID=f385d8de-10a4-4e80-9db8-2ad390f2c5bc kind=\"Pod\" virtual=false\nI0623 01:26:18.553507 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-8476, name: inline-volume-tester-5wzwg, uid: f385d8de-10a4-4e80-9db8-2ad390f2c5bc]\nI0623 01:26:18.553923 11 pv_controller.go:651] volume \"pvc-4186a50b-2624-4e78-a138-660f17ef054e\" is released and reclaim policy \"Delete\" will be executed\nW0623 01:26:18.557761 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:18.557813 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:18.564517 11 pv_controller.go:890] volume \"pvc-4186a50b-2624-4e78-a138-660f17ef054e\" entered phase \"Released\"\nI0623 01:26:18.568179 11 pv_controller.go:1353] isVolumeReleased[pvc-4186a50b-2624-4e78-a138-660f17ef054e]: volume is released\nI0623 01:26:18.588181 11 pv_controller_base.go:582] deletion of claim \"ephemeral-8476/inline-volume-tester-5wzwg-my-volume-0\" was already processed\nE0623 01:26:18.691482 11 pv_controller.go:1501] error finding provisioning plugin for claim volume-8600/pvc-b9qsf: storageclass.storage.k8s.io \"volume-8600\" not found\nI0623 01:26:18.691606 11 event.go:294] \"Event occurred\" object=\"volume-8600/pvc-b9qsf\" fieldPath=\"\" 
kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-8600\\\" not found\"\nI0623 01:26:18.704113 11 namespace_controller.go:185] Namespace has been deleted kubectl-9358\nI0623 01:26:18.712396 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-8476^419d564d-f293-11ec-8879-f22a4ec34fd8 VolumeSpec:0xc0030a8cd8 NodeName:nodes-us-west3-a-j1m9 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:26:18.595345793 +0000 UTC m=+976.542048338}\nI0623 01:26:18.718013 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-4186a50b-2624-4e78-a138-660f17ef054e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-8476^419d564d-f293-11ec-8879-f22a4ec34fd8\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:26:18.725562 11 pv_controller.go:890] volume \"local-vkw59\" entered phase \"Available\"\nI0623 01:26:18.830902 11 garbagecollector.go:223] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1beta1, Resource=noxus], removed: []\nI0623 01:26:18.858869 11 shared_informer.go:255] Waiting for caches to sync for garbage collector\nI0623 01:26:18.960538 11 shared_informer.go:262] Caches are synced for garbage collector\nI0623 01:26:18.960818 11 garbagecollector.go:266] synced garbage collector\nI0623 01:26:19.230245 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-4186a50b-2624-4e78-a138-660f17ef054e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-8476^419d564d-f293-11ec-8879-f22a4ec34fd8\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:26:19.345993 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3035\nI0623 01:26:19.641222 11 namespace_controller.go:185] Namespace has been deleted 
secrets-4617\nI0623 01:26:20.001000 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"kubectl-2219/httpd-deployment\"\nI0623 01:26:20.565354 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume \"pvc-73740996-47d1-45b1-ba5f-42acd1d75114\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-73740996-47d1-45b1-ba5f-42acd1d75114\") from node \"nodes-us-west3-a-9jqc\" \nI0623 01:26:20.565945 11 event.go:294] \"Event occurred\" object=\"statefulset-9707/ss-0\" fieldPath=\"\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-73740996-47d1-45b1-ba5f-42acd1d75114\\\" \"\nI0623 01:26:20.665956 11 namespace_controller.go:185] Namespace has been deleted ephemeral-6850-2938\nI0623 01:26:20.723523 11 namespace_controller.go:185] Namespace has been deleted provisioning-5364\nI0623 01:26:21.111258 11 namespace_controller.go:185] Namespace has been deleted provisioning-6376\nI0623 01:26:21.116877 11 event.go:294] \"Event occurred\" object=\"provisioning-9554-4762/csi-hostpathplugin\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0623 01:26:21.182549 11 event.go:294] \"Event occurred\" object=\"provisioning-9554/csi-hostpathp7cj4\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9554\\\" or manually created by system administrator\"\nI0623 01:26:21.183864 11 event.go:294] \"Event occurred\" object=\"provisioning-9554/csi-hostpathp7cj4\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume 
to be created, either by external provisioner \\\"csi-hostpath-provisioning-9554\\\" or manually created by system administrator\"\nI0623 01:26:21.684632 11 event.go:294] \"Event occurred\" object=\"volume-3387-7352/csi-hostpathplugin\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0623 01:26:21.756111 11 event.go:294] \"Event occurred\" object=\"volume-3387/csi-hostpathjrldc\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-3387\\\" or manually created by system administrator\"\nW0623 01:26:21.956679 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:21.956714 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:22.758639 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods\nW0623 01:26:23.105885 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:23.106271 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0623 01:26:24.282837 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to 
list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:24.282875 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:24.352298 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8870-5968\nI0623 01:26:24.808394 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"deployment-6777/test-orphan-deployment-68c48f9ff9\" need=1 creating=1\nI0623 01:26:24.930551 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"deployment-6777/test-adopt-deployment\"\nI0623 01:26:25.539679 11 namespace_controller.go:185] Namespace has been deleted downward-api-9832\nI0623 01:26:25.855656 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2402-1639\nI0623 01:26:26.669597 11 pv_controller.go:890] volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" entered phase \"Bound\"\nI0623 01:26:26.669638 11 pv_controller.go:993] volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" bound to claim \"provisioning-9554/csi-hostpathp7cj4\"\nI0623 01:26:26.688863 11 pv_controller.go:834] claim \"provisioning-9554/csi-hostpathp7cj4\" entered phase \"Bound\"\nI0623 01:26:26.698941 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-8476-35/csi-hostpathplugin-57f86d6cff\" objectUID=43984304-9384-42d0-8361-a51bbf20b2ba kind=\"ControllerRevision\" virtual=false\nI0623 01:26:26.699500 11 stateful_set.go:450] StatefulSet has been deleted ephemeral-8476-35/csi-hostpathplugin\nI0623 01:26:26.699713 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-8476-35/csi-hostpathplugin-0\" objectUID=91b561b0-17e3-4427-9a43-f5900b50ead8 kind=\"Pod\" virtual=false\nI0623 01:26:26.702656 11 garbagecollector.go:616] \"Deleting object\" object=\"ephemeral-8476-35/csi-hostpathplugin-57f86d6cff\" 
objectUID=43984304-9384-42d0-8361-a51bbf20b2ba kind="ControllerRevision" propagationPolicy=Background
I0623 01:26:26.702667 11 garbagecollector.go:616] "Deleting object" object="ephemeral-8476-35/csi-hostpathplugin-0" objectUID=91b561b0-17e3-4427-9a43-f5900b50ead8 kind="Pod" propagationPolicy=Background
I0623 01:26:27.417368 11 namespace_controller.go:185] Namespace has been deleted kubelet-test-5096
I0623 01:26:27.422302 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e VolumeSpec:0xc00283f9e0 NodeName:nodes-us-west3-a-l43j ScheduledPods:[&Pod{ObjectMeta:{pod-subpath-test-dynamicpv-kp7s provisioning-9554 dac63d51-89f0-40b4-b7b9-2baeebf8fd2e 31970 0 2022-06-23 01:26:27 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2022-06-23 01:26:27 +0000 UTC FieldsV1 {"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"test-container-subpath-dynamicpv-kp7s\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/probe-volume\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/test-volume\"}":{".":{},"f:mountPath":{},"f:name":{},"f:subPath":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:initContainers":{".":{},"k:{\"name\":\"init-volume-dynamicpv-kp7s\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/probe-volume\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/test-volume\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"test-init-subpath-dynamicpv-kp7s\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/probe-volume\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/test-volume\"}":{".":{},"f:mountPath":{},"f:name":{},"f:subPath":{}}}}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:seLinuxOptions":{".":{},"f:level":{}}},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"liveness-probe-volume\"}":{".":{},"f:emptyDir":{},"f:name":{}},"k:{\"name\":\"test-volume\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:test-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:csi-hostpathp7cj4,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:liveness-probe-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,},GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-lg8z6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-container-subpath-dynamicpv-kp7s,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,Command:[],Args:[mounttest --file_content_in_loop=/test-volume/test-file --retry_time=20],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/test-volume,SubPath:provisioning-9554,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lg8z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-l43j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-l43j],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{Container{Name:init-volume-dynamicpv-kp7s,Image:registry.k8s.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh -c mkdir -p /test-volume/provisioning-9554],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/test-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lg8z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},Container{Name:test-init-subpath-dynamicpv-kp7s,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,Command:[],Args:[mounttest --new_file_0644=/test-volume/test-file --file_mode=/test-volume/test-file],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/test-volume,SubPath:provisioning-9554,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lg8z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0623 01:26:27.535213 11 pv_controller.go:890] volume "pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5" entered phase "Bound"
I0623 01:26:27.535255 11 pv_controller.go:993] volume "pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5" bound to claim "volume-3387/csi-hostpathjrldc"
I0623 01:26:27.546550 11 pv_controller.go:834] claim "volume-3387/csi-hostpathjrldc" entered phase "Bound"
I0623 01:26:27.612868 11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I0623 01:26:27.624633 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0623 01:26:27.633145 11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0623 01:26:27.668342 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"pd.csi.storage.gke.io\" or manually created by system administrator"
I0623 01:26:27.791384 11 event.go:294] "Event occurred" object="csi-mock-volumes-8251-7526/csi-mockplugin-attacher" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0623 01:26:27.829308 11 event.go:294] "Event occurred" object="csi-mock-volumes-8251-7526/csi-mockplugin" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0623 01:26:27.972526 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e") from node "nodes-us-west3-a-l43j" 
I0623 01:26:27.972862 11 event.go:294] "Event occurred" object="provisioning-9554/pod-subpath-test-dynamicpv-kp7s" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" "
I0623 01:26:28.172435 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-volume-3387^80b69f21-f293-11ec-9ddc-169b2121904c VolumeSpec:0xc003965470 NodeName:nodes-us-west3-a-l43j ScheduledPods:[&Pod{ObjectMeta:{hostpath-injector volume-3387 a7c5bf52-35c2-4f76-9e4b-a4faff95c86b 32016 0 2022-06-23 01:26:28 +0000 UTC <nil> <nil> map[role:hostpath-injector] map[] [] [] [{e2e.test Update v1 2022-06-23 01:26:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:role":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"hostpath-injector\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/opt/0\"}":{".":{},"f:mountPath":{},"f:name":{}}},"f:workingDir":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:seLinuxOptions":{".":{},"f:level":{}}},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"hostpath-volume-0\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:hostpath-volume-0,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:csi-hostpathjrldc,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-zg5x5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:hostpath-injector,Image:registry.k8s.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh -c while true ; do sleep 2; done ],Args:[],WorkingDir:/opt,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostpath-volume-0,ReadOnly:false,MountPath:/opt/0,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zg5x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-l43j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-l43j],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0623 01:26:28.591666 11 namespace_controller.go:185] Namespace has been deleted disruption-8471
I0623 01:26:28.679674 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-3387^80b69f21-f293-11ec-9ddc-169b2121904c") from node "nodes-us-west3-a-l43j" 
I0623 01:26:28.680169 11 event.go:294] "Event occurred" object="volume-3387/hostpath-injector" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5\" "
I0623 01:26:28.807564 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForPodScheduled" message="waiting for pod inline-volume-tester2-7bzmh to be scheduled"
I0623 01:26:29.950263 11 namespace_controller.go:185] Namespace has been deleted deployment-6777
I0623 01:26:30.045494 11 namespace_controller.go:185] Namespace has been deleted ephemeral-8476
I0623 01:26:30.445080 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-61\" or manually created by system administrator"
I0623 01:26:30.464021 11 pv_controller.go:890] volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" entered phase "Bound"
I0623 01:26:30.464085 11 pv_controller.go:993] volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" bound to claim "ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0"
I0623 01:26:30.479151 11 pv_controller.go:834] claim "ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0" entered phase "Bound"
I0623 01:26:31.065128 11 namespace_controller.go:185] Namespace has been deleted disruption-7040
I0623 01:26:31.476148 11 pv_controller.go:941] claim "volume-8600/pvc-b9qsf" bound to volume "local-vkw59"
I0623 01:26:31.492643 11 pv_controller.go:890] volume "local-vkw59" entered phase "Bound"
I0623 01:26:31.493054 11 pv_controller.go:993] volume "local-vkw59" bound to claim "volume-8600/pvc-b9qsf"
I0623 01:26:31.504839 11 pv_controller.go:834] claim "volume-8600/pvc-b9qsf" entered phase "Bound"
I0623 01:26:31.506099 11 pv_controller.go:941] claim "volume-9126/pvc-s7npm" bound to volume "local-dvvdg"
I0623 01:26:31.521109 11 pv_controller.go:890] volume "local-dvvdg" entered phase "Bound"
I0623 01:26:31.521420 11 pv_controller.go:993] volume "local-dvvdg" bound to claim "volume-9126/pvc-s7npm"
I0623 01:26:31.531411 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-61^827561f9-f293-11ec-bbcb-a2260c93e9d3 VolumeSpec:0xc000cefe48 NodeName:nodes-us-west3-a-s284 ScheduledPods:[&Pod{ObjectMeta:{inline-volume-tester2-7bzmh inline-volume-tester2- ephemeral-61 2f895efb-c5ad-4539-9c30-5f8f8f4b5741 32094 0 2022-06-23 01:26:28 +0000 UTC <nil> <nil> map[app:inline-volume-tester2] map[] [] [] [{e2e.test Update v1 2022-06-23 01:26:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"csi-volume-tester\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-0\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"my-volume-0\"}":{".":{},"f:ephemeral":{".":{},"f:volumeClaimTemplate":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{}},"f:spec":{".":{},"f:accessModes":{},"f:resources":{".":{},"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}},"f:name":{}}}}} } {kube-scheduler Update v1 2022-06-23 01:26:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:my-volume-0,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:&EphemeralVolumeSource{VolumeClaimTemplate:&PersistentVolumeClaimTemplate{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1048576 0} {<nil>} 1Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*ephemeral-61vjtjp,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},},},},},Volume{Name:kube-api-access-gvtxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:csi-volume-tester,Image:registry.k8s.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh -c sleep 100000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:my-volume-0,ReadOnly:false,MountPath:/mnt/test-0,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gvtxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s284,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-s284],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0623 01:26:31.533332 11 pv_controller.go:834] claim "volume-9126/pvc-s7npm" entered phase "Bound"
I0623 01:26:31.534333 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"pd.csi.storage.gke.io\" or manually created by system administrator"
I0623 01:26:31.996121 11 namespace_controller.go:185] Namespace has been deleted projected-4326
I0623 01:26:32.055786 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-61^827561f9-f293-11ec-bbcb-a2260c93e9d3") from node "nodes-us-west3-a-s284" 
I0623 01:26:32.056059 11 event.go:294] "Event occurred" object="ephemeral-61/inline-volume-tester2-7bzmh" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409\" "
I0623 01:26:32.504921 11 pv_controller.go:890] volume "pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e" entered phase "Bound"
I0623 01:26:32.504986 11 pv_controller.go:993] volume "pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e" bound to claim "statefulset-9707/datadir-ss-1"
I0623 01:26:32.515272 11 pv_controller.go:834] claim "statefulset-9707/datadir-ss-1" entered phase "Bound"
I0623 01:26:32.657039 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e VolumeSpec:0xc0025cc3c0 NodeName:nodes-us-west3-a-j1m9 ScheduledPods:[&Pod{ObjectMeta:{ss-1 ss- statefulset-9707 f1b171f5-29e2-4569-b003-76fefea8f4ed 32149 0 2022-06-23 01:26:27 +0000 UTC <nil> <nil> map[baz:blah controller-revision-hash:ss-5b74f7c5d foo:bar statefulset.kubernetes.io/pod-name:ss-1] map[] [{apps/v1 StatefulSet ss 443a664f-ea73-4296-b08a-6209ac80fa30 0xc000fa16e7 0xc000fa16e8}] [] [{kube-controller-manager Update v1 2022-06-23 01:26:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:baz":{},"f:controller-revision-hash":{},"f:foo":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"443a664f-ea73-4296-b08a-6209ac80fa30\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"webserver\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/data/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/home\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:subdomain":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"datadir\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}},"k:{\"name\":\"home\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:datadir,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:datadir-ss-1,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:home,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/tmp/home,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-xrwf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&Downw
ardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:webserver,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:datadir,ReadOnly:false,MountPath:/data/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:home,ReadOnly:false,MountPath:/home,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xrwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[test -f 
/data/statefulset-continue],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j1m9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:ss-1,Subdomain:test,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:32 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nW0623 01:26:33.058925 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:33.058961 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0623 01:26:34.756841 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:34.756879 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:36.714436 11 namespace_controller.go:185] Namespace has been deleted job-434\nI0623 01:26:37.015073 11 namespace_controller.go:185] Namespace has been deleted ephemeral-8476-35\nI0623 01:26:37.365147 11 namespace_controller.go:185] Namespace has been deleted container-probe-390\nI0623 01:26:37.404480 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8251/pvc-rd692\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8251\\\" or manually created by system administrator\"\nI0623 01:26:37.442592 11 pv_controller.go:890] volume \"pvc-5ed981de-b34a-4bef-9fbd-05b8a6e2c3e2\" entered phase \"Bound\"\nI0623 01:26:37.442638 11 pv_controller.go:993] volume 
\"pvc-5ed981de-b34a-4bef-9fbd-05b8a6e2c3e2\" bound to claim \"csi-mock-volumes-8251/pvc-rd692\"\nI0623 01:26:37.467632 11 pv_controller.go:834] claim \"csi-mock-volumes-8251/pvc-rd692\" entered phase \"Bound\"\nI0623 01:26:37.998463 11 namespace_controller.go:185] Namespace has been deleted kubectl-5981\nI0623 01:26:38.673450 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"volume-9126/pvc-s7npm\"\nI0623 01:26:38.689337 11 pv_controller.go:651] volume \"local-dvvdg\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:26:38.693395 11 pv_controller.go:890] volume \"local-dvvdg\" entered phase \"Released\"\nI0623 01:26:38.706199 11 pv_controller_base.go:582] deletion of claim \"volume-9126/pvc-s7npm\" was already processed\nI0623 01:26:39.260224 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-volume-expand-6317^538a62f3-f293-11ec-8a1b-7eccb78503aa VolumeSpec:0xc003686e28 NodeName:nodes-us-west3-a-j1m9 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:26:38.213715163 +0000 UTC m=+996.160417683}\nI0623 01:26:39.288224 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-4fee6476-581a-4ef6-ba3c-2ca49753bcba\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-6317^538a62f3-f293-11ec-8a1b-7eccb78503aa\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:26:39.335638 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=3\nW0623 01:26:39.354336 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:39.354702 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: 
the server could not find the requested resource\nI0623 01:26:39.375171 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-9g2n8\"\nI0623 01:26:39.398542 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-b6sx2\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:39.400971 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-z9f6n\"\nI0623 01:26:39.401001 11 replica_set.go:602] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.417195 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-b6sx2\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.417348 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:39.420768 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-tlthx\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:39.420504 11 replica_set.go:602] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.429304 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-tlthx\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.429387 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:39.432176 11 replica_set.go:602] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.432516 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-4sgz8\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.432408 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-4sgz8\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:39.439453 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:39.444324 11 replica_set.go:602] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.444395 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-h4tcp\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.445144 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-h4tcp\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:39.485009 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:39.487855 11 replica_set.go:602] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.488247 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-j4wz2\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.488498 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-j4wz2\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:39.518125 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"volume-expand-6317/csi-hostpathr7nn2\"\nI0623 01:26:39.541199 11 pv_controller.go:651] volume \"pvc-4fee6476-581a-4ef6-ba3c-2ca49753bcba\" is released and reclaim policy \"Delete\" will be executed\nI0623 01:26:39.554263 11 pv_controller.go:890] volume \"pvc-4fee6476-581a-4ef6-ba3c-2ca49753bcba\" entered phase \"Released\"\nI0623 01:26:39.568789 11 
pv_controller.go:1353] isVolumeReleased[pvc-4fee6476-581a-4ef6-ba3c-2ca49753bcba]: volume is released\nI0623 01:26:39.573685 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:39.580220 11 replica_set.go:602] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.581440 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-6n6l5\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.581147 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-6n6l5\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:39.621274 11 pv_controller_base.go:582] deletion of claim \"volume-expand-6317/csi-hostpathr7nn2\" was already processed\nI0623 01:26:39.671615 11 reconciler.go:325] \"attacherDetacher.AttachVolume started\" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-mock-csi-mock-volumes-8251^869b9bf9-f293-11ec-bb9a-8a4045343884 VolumeSpec:0xc0025cd1d0 NodeName:nodes-us-west3-a-l43j ScheduledPods:[&Pod{ObjectMeta:{pvc-volume-tester-425ch pvc-volume-tester- csi-mock-volumes-8251 708f6eac-1fdd-481a-b346-70eb94a7c423 32367 0 2022-06-23 01:26:39 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2022-06-23 01:26:39 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:affinity\":{\".\":{},\"f:nodeAffinity\":{\".\":{},\"f:requiredDuringSchedulingIgnoredDuringExecution\":{}}},\"f:containers\":{\"k:{\\\"name\\\":\\\"volume-tester\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/mnt/test\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}}}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"my-volume\\\"}\":{\".\":{},\"f:name\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:claimName\":{}}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:my-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:pvc-rd692,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-khdcg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:toke
n,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:volume-tester,Image:registry.k8s.io/pause:3.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:my-volume,ReadOnly:false,MountPath:/mnt/test,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-khdcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-l43j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&Node
Affinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-l43j],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nI0623 01:26:39.743235 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:39.746890 11 replica_set.go:602] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:39.746972 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-bbfr7\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:39.747341 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-bbfr7\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nW0623 01:26:39.776338 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:39.776407 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:39.841646 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-4fee6476-581a-4ef6-ba3c-2ca49753bcba\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-6317^538a62f3-f293-11ec-8a1b-7eccb78503aa\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:26:40.067750 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"replicaset-3588/condition-test\" need=3 creating=1\nI0623 01:26:40.071809 11 replica_set.go:602] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-3588/condition-test\nE0623 01:26:40.071875 11 replica_set.go:550] sync \"replicaset-3588/condition-test\" failed with pods \"condition-test-zqn45\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0623 01:26:40.072364 11 event.go:294] \"Event occurred\" object=\"replicaset-3588/condition-test\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-zqn45\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0623 01:26:40.223655 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume \"pvc-5ed981de-b34a-4bef-9fbd-05b8a6e2c3e2\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8251^869b9bf9-f293-11ec-bb9a-8a4045343884\") from node \"nodes-us-west3-a-l43j\" \nI0623 01:26:40.224042 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8251/pvc-volume-tester-425ch\" fieldPath=\"\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-5ed981de-b34a-4bef-9fbd-05b8a6e2c3e2\\\" \"\nI0623 01:26:40.476287 11 namespace_controller.go:185] Namespace has been deleted projected-9906\nE0623 01:26:40.512961 11 pv_controller.go:1501] error finding provisioning plugin for claim provisioning-3701/pvc-jzzt9: storageclass.storage.k8s.io \"provisioning-3701\" not found\nI0623 01:26:40.513454 11 event.go:294] \"Event occurred\" object=\"provisioning-3701/pvc-jzzt9\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3701\\\" not found\"\nI0623 01:26:40.544496 11 pv_controller.go:890] volume \"local-gwj6g\" entered phase \"Available\"\nE0623 01:26:40.822265 11 pv_controller.go:1501] error 
finding provisioning plugin for claim provisioning-2799/pvc-gknbc: storageclass.storage.k8s.io \"provisioning-2799\" not found\nI0623 01:26:40.822586 11 event.go:294] \"Event occurred\" object=\"provisioning-2799/pvc-gknbc\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2799\\\" not found\"\nI0623 01:26:40.854725 11 pv_controller.go:890] volume \"local-dr65z\" entered phase \"Available\"\nW0623 01:26:40.960034 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:40.960068 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:41.256435 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"volume-8600/pvc-b9qsf\"\nI0623 01:26:41.264959 11 pv_controller.go:651] volume \"local-vkw59\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:26:41.271087 11 pv_controller.go:890] volume \"local-vkw59\" entered phase \"Released\"\nI0623 01:26:41.290770 11 pv_controller_base.go:582] deletion of claim \"volume-8600/pvc-b9qsf\" was already processed\nI0623 01:26:41.303896 11 event.go:294] \"Event occurred\" object=\"ephemeral-6433-8450/csi-hostpathplugin\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0623 01:26:41.447720 11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-61, name: inline-volume-tester2-7bzmh, uid: 2f895efb-c5ad-4539-9c30-5f8f8f4b5741] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0623 01:26:41.448827 11 
garbagecollector.go:504] \"Processing object\" object=\"ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0\" objectUID=d1f9410d-eeef-48c2-b9cd-ceb7daf1f409 kind=\"PersistentVolumeClaim\" virtual=false\nI0623 01:26:41.449077 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-61/inline-volume-tester2-7bzmh\" objectUID=2f895efb-c5ad-4539-9c30-5f8f8f4b5741 kind=\"Pod\" virtual=false\nI0623 01:26:41.475579 11 garbagecollector.go:631] adding [v1/PersistentVolumeClaim, namespace: ephemeral-61, name: inline-volume-tester2-7bzmh-my-volume-0, uid: d1f9410d-eeef-48c2-b9cd-ceb7daf1f409] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-61, name: inline-volume-tester2-7bzmh, uid: 2f895efb-c5ad-4539-9c30-5f8f8f4b5741] is deletingDependents\nI0623 01:26:41.482412 11 garbagecollector.go:616] \"Deleting object\" object=\"ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0\" objectUID=d1f9410d-eeef-48c2-b9cd-ceb7daf1f409 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0623 01:26:41.488702 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0\" objectUID=d1f9410d-eeef-48c2-b9cd-ceb7daf1f409 kind=\"PersistentVolumeClaim\" virtual=false\nI0623 01:26:41.489524 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-61/inline-volume-tester2-7bzmh\" PVC=\"ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0\"\nI0623 01:26:41.489549 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0\"\nI0623 01:26:41.493715 11 garbagecollector.go:616] \"Deleting object\" object=\"ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0\" objectUID=d1f9410d-eeef-48c2-b9cd-ceb7daf1f409 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE0623 01:26:41.648603 11 pv_controller.go:1501] error finding provisioning plugin for claim provisioning-9659/pvc-7q5qr: storageclass.storage.k8s.io 
\"provisioning-9659\" not found\nI0623 01:26:41.649319 11 event.go:294] \"Event occurred\" object=\"provisioning-9659/pvc-7q5qr\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9659\\\" not found\"\nI0623 01:26:41.685812 11 pv_controller.go:890] volume \"local-sbqkf\" entered phase \"Available\"\nI0623 01:26:42.320165 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e VolumeSpec:0xc00283f9e0 NodeName:nodes-us-west3-a-l43j PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:0001-01-01 00:00:00 +0000 UTC}\nI0623 01:26:42.328370 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e\") on node \"nodes-us-west3-a-l43j\" \nI0623 01:26:42.618919 11 namespace_controller.go:185] Namespace has been deleted pods-59\nI0623 01:26:42.753891 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5383-955/csi-mockplugin\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0623 01:26:42.863227 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e\") on node \"nodes-us-west3-a-l43j\" \nI0623 01:26:43.319662 11 namespace_controller.go:185] Namespace has been deleted subpath-2693\nE0623 01:26:43.400325 11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in 
namespace: svcaccounts-4323 for gvr: /v1, Resource=pods\nI0623 01:26:43.634061 11 reconciler.go:325] \"attacherDetacher.AttachVolume started\" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e VolumeSpec:0xc000cd9770 NodeName:nodes-us-west3-a-l43j ScheduledPods:[&Pod{ObjectMeta:{pod-subpath-test-dynamicpv-kp7s provisioning-9554 2c5d9f28-66ee-42a2-bf3e-5fb3b77b420b 32578 0 2022-06-23 01:26:43 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2022-06-23 01:26:43 +0000 UTC FieldsV1 {\"f:spec\":{\"f:affinity\":{\".\":{},\"f:nodeAffinity\":{\".\":{},\"f:requiredDuringSchedulingIgnoredDuringExecution\":{}}},\"f:containers\":{\"k:{\\\"name\\\":\\\"test-container-subpath-dynamicpv-kp7s\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:securityContext\":{\".\":{},\"f:privileged\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/probe-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}},\"k:{\\\"mountPath\\\":\\\"/test-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{},\"f:subPath\":{}}}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{\".\":{},\"f:seLinuxOptions\":{\".\":{},\"f:level\":{}}},\"f:terminationGracePeriodSeconds\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"liveness-probe-volume\\\"}\":{\".\":{},\"f:emptyDir\":{},\"f:name\":{}},\"k:{\\\"name\\\":\\\"test-volume\\\"}\":{\".\":{},\"f:name\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:claimName\":{},\"f:readOnly\":{}}}}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:test-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:csi-hostpathp7cj4,ReadOnly:true,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:liveness-probe-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,},GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-h82mt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Se
cret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-container-subpath-dynamicpv-kp7s,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,Command:[],Args:[mounttest --file_content_in_loop=/test-volume/test-file --retry_time=20],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/test-volume,SubPath:provisioning-9554,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h82mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-l43j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUs
er:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-l43j],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:26:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nI0623 01:26:43.658480 11 namespace_controller.go:185] Namespace has been deleted discovery-5462\nI0623 01:26:44.168797 11 operation_generator.go:398] 
AttachVolume.Attach succeeded for volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e\") from node \"nodes-us-west3-a-l43j\" \nI0623 01:26:44.169150 11 event.go:294] \"Event occurred\" object=\"provisioning-9554/pod-subpath-test-dynamicpv-kp7s\" fieldPath=\"\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\\\" \"\nI0623 01:26:44.588773 11 namespace_controller.go:185] Namespace has been deleted subpath-9620\nI0623 01:26:44.642607 11 namespace_controller.go:185] Namespace has been deleted port-forwarding-6768\nI0623 01:26:44.796412 11 pv_controller.go:890] volume \"local-pvzbhd4\" entered phase \"Available\"\nI0623 01:26:44.816991 11 pv_controller.go:941] claim \"persistent-local-volumes-test-7019/pvc-t7r82\" bound to volume \"local-pvzbhd4\"\nI0623 01:26:44.837671 11 pv_controller.go:890] volume \"local-pvzbhd4\" entered phase \"Bound\"\nI0623 01:26:44.838099 11 pv_controller.go:993] volume \"local-pvzbhd4\" bound to claim \"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:44.852160 11 pv_controller.go:834] claim \"persistent-local-volumes-test-7019/pvc-t7r82\" entered phase \"Bound\"\nI0623 01:26:45.601856 11 resource_quota_controller.go:312] Resource quota has been deleted replicaset-3588/condition-test\nI0623 01:26:45.629640 11 garbagecollector.go:504] \"Processing object\" object=\"replicaset-3588/condition-test-z9f6n\" objectUID=33e6c026-3c25-4466-8ef1-99d135642bfa kind=\"Pod\" virtual=false\nI0623 01:26:45.630026 11 garbagecollector.go:504] \"Processing object\" object=\"replicaset-3588/condition-test-9g2n8\" objectUID=1cf38fa8-4c84-4d17-81df-3454bd8f15a4 kind=\"Pod\" virtual=false\nI0623 01:26:45.634104 11 garbagecollector.go:616] \"Deleting object\" object=\"replicaset-3588/condition-test-9g2n8\" 
objectUID=1cf38fa8-4c84-4d17-81df-3454bd8f15a4 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:45.634511 11 garbagecollector.go:616] \"Deleting object\" object=\"replicaset-3588/condition-test-z9f6n\" objectUID=33e6c026-3c25-4466-8ef1-99d135642bfa kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:46.156598 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume \"pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\") from node \"nodes-us-west3-a-j1m9\" \nI0623 01:26:46.157311 11 event.go:294] \"Event occurred\" object=\"statefulset-9707/ss-1\" fieldPath=\"\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\\\" \"\nI0623 01:26:46.183337 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-rtbq4\" objectUID=ed94cddb-2242-4b5c-bdcd-960a5c0a1294 kind=\"Pod\" virtual=false\nI0623 01:26:46.184798 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-fj5pg\" objectUID=ca775c6b-91ea-4256-b478-2c75903d0f85 kind=\"Pod\" virtual=false\nI0623 01:26:46.184921 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-twqzm\" objectUID=068e3a2f-dc93-4987-a28a-e10b4bbde835 kind=\"Pod\" virtual=false\nI0623 01:26:46.193132 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-fj5pg\" objectUID=ca775c6b-91ea-4256-b478-2c75903d0f85 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:46.193398 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-twqzm\" objectUID=068e3a2f-dc93-4987-a28a-e10b4bbde835 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:46.193638 11 garbagecollector.go:616] \"Deleting object\" 
object=\"services-9875/service-headless-rtbq4\" objectUID=ed94cddb-2242-4b5c-bdcd-960a5c0a1294 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:46.201707 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-toggled-ww9q5\" objectUID=87fcf1c7-91f4-4d45-817c-e27cdac411a1 kind=\"Pod\" virtual=false\nI0623 01:26:46.201893 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-toggled-dxg85\" objectUID=dbe71e47-3c72-4168-a2c2-daf0b4cb1943 kind=\"Pod\" virtual=false\nI0623 01:26:46.202972 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-toggled-pmb54\" objectUID=5de85ee7-56ca-4cac-9d1f-69242c192209 kind=\"Pod\" virtual=false\nI0623 01:26:46.214431 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-toggled-pmb54\" objectUID=5de85ee7-56ca-4cac-9d1f-69242c192209 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:46.215554 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-toggled-dxg85\" objectUID=dbe71e47-3c72-4168-a2c2-daf0b4cb1943 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:46.216601 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-toggled-ww9q5\" objectUID=87fcf1c7-91f4-4d45-817c-e27cdac411a1 kind=\"Pod\" propagationPolicy=Background\nW0623 01:26:46.228660 11 utils.go:264] Service services-9875/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nW0623 01:26:46.258790 11 utils.go:264] Service services-9875/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0623 01:26:46.477286 11 pv_controller.go:941] claim \"provisioning-3701/pvc-jzzt9\" bound to volume \"local-gwj6g\"\nI0623 01:26:46.496106 11 pv_controller.go:890] volume \"local-gwj6g\" entered phase \"Bound\"\nI0623 01:26:46.496232 11 pv_controller.go:993] volume 
\"local-gwj6g\" bound to claim \"provisioning-3701/pvc-jzzt9\"\nI0623 01:26:46.511278 11 pv_controller.go:834] claim \"provisioning-3701/pvc-jzzt9\" entered phase \"Bound\"\nI0623 01:26:46.511454 11 pv_controller.go:941] claim \"provisioning-2799/pvc-gknbc\" bound to volume \"local-dr65z\"\nI0623 01:26:46.521771 11 pv_controller.go:890] volume \"local-dr65z\" entered phase \"Bound\"\nI0623 01:26:46.521808 11 pv_controller.go:993] volume \"local-dr65z\" bound to claim \"provisioning-2799/pvc-gknbc\"\nI0623 01:26:46.534147 11 pv_controller.go:834] claim \"provisioning-2799/pvc-gknbc\" entered phase \"Bound\"\nI0623 01:26:46.535074 11 pv_controller.go:941] claim \"provisioning-9659/pvc-7q5qr\" bound to volume \"local-sbqkf\"\nI0623 01:26:46.552283 11 pv_controller.go:890] volume \"local-sbqkf\" entered phase \"Bound\"\nI0623 01:26:46.552319 11 pv_controller.go:993] volume \"local-sbqkf\" bound to claim \"provisioning-9659/pvc-7q5qr\"\nI0623 01:26:46.566982 11 pv_controller.go:834] claim \"provisioning-9659/pvc-7q5qr\" entered phase \"Bound\"\nI0623 01:26:46.707597 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-n726v\" objectUID=b063c827-5d9b-4672-b6f1-56d911707c05 kind=\"EndpointSlice\" virtual=false\nI0623 01:26:46.717015 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-n726v\" objectUID=b063c827-5d9b-4672-b6f1-56d911707c05 kind=\"EndpointSlice\" propagationPolicy=Background\nI0623 01:26:46.720267 11 garbagecollector.go:504] \"Processing object\" object=\"services-9875/service-headless-toggled-h9f4l\" objectUID=42853147-9713-4ecb-94a7-b4313c103e64 kind=\"EndpointSlice\" virtual=false\nI0623 01:26:46.727485 11 garbagecollector.go:616] \"Deleting object\" object=\"services-9875/service-headless-toggled-h9f4l\" objectUID=42853147-9713-4ecb-94a7-b4313c103e64 kind=\"EndpointSlice\" propagationPolicy=Background\nE0623 01:26:46.829693 11 namespace_controller.go:162] deletion of namespace 
services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nE0623 01:26:47.052908 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nI0623 01:26:47.172278 11 pv_controller.go:890] volume \"local-pv9swhn\" entered phase \"Available\"\nI0623 01:26:47.188422 11 pv_controller.go:941] claim \"persistent-local-volumes-test-3583/pvc-zwv7g\" bound to volume \"local-pv9swhn\"\nI0623 01:26:47.202670 11 pv_controller.go:890] volume \"local-pv9swhn\" entered phase \"Bound\"\nI0623 01:26:47.203259 11 pv_controller.go:993] volume \"local-pv9swhn\" bound to claim \"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:47.215908 11 pv_controller.go:834] claim \"persistent-local-volumes-test-3583/pvc-zwv7g\" entered phase \"Bound\"\nE0623 01:26:47.228274 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nE0623 01:26:47.448161 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nE0623 01:26:47.788825 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nI0623 01:26:48.078918 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"kubectl-240/agnhost-primary\" need=1 creating=1\nI0623 01:26:48.086727 11 namespace_controller.go:185] Namespace has been deleted configmap-4688\nI0623 01:26:48.093275 11 event.go:294] \"Event occurred\" object=\"kubectl-240/agnhost-primary\" fieldPath=\"\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-8phw2\"\nE0623 01:26:48.167530 11 namespace_controller.go:162] deletion of 
namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nE0623 01:26:48.467801 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nE0623 01:26:49.012595 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nI0623 01:26:49.093094 11 namespace_controller.go:185] Namespace has been deleted kubectl-8887\nI0623 01:26:49.242888 11 resource_quota_controller.go:312] Resource quota has been deleted resourcequota-1381/test-quota\nI0623 01:26:49.306346 11 namespace_controller.go:185] Namespace has been deleted volume-9126\nI0623 01:26:49.447399 11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-3066, name: inline-volume-tester-xlvr7, uid: cfa40f0a-ad0c-4025-b56f-5a899193b6cf] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0623 01:26:49.447549 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\" objectUID=2985ff71-ea3a-4f55-bf71-a243ca3dfce9 kind=\"PersistentVolumeClaim\" virtual=false\nI0623 01:26:49.448333 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-3066/inline-volume-tester-xlvr7\" objectUID=cfa40f0a-ad0c-4025-b56f-5a899193b6cf kind=\"Pod\" virtual=false\nI0623 01:26:49.459125 11 garbagecollector.go:631] adding [v1/PersistentVolumeClaim, namespace: ephemeral-3066, name: inline-volume-tester-xlvr7-my-volume-0, uid: 2985ff71-ea3a-4f55-bf71-a243ca3dfce9] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-3066, name: inline-volume-tester-xlvr7, uid: cfa40f0a-ad0c-4025-b56f-5a899193b6cf] is deletingDependents\nI0623 01:26:49.463258 11 garbagecollector.go:616] \"Deleting object\" object=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\" 
objectUID=2985ff71-ea3a-4f55-bf71-a243ca3dfce9 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0623 01:26:49.467310 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\" objectUID=2985ff71-ea3a-4f55-bf71-a243ca3dfce9 kind=\"PersistentVolumeClaim\" virtual=false\nI0623 01:26:49.470191 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-3066/inline-volume-tester-xlvr7\" PVC=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\"\nI0623 01:26:49.470233 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\"\nI0623 01:26:49.473976 11 garbagecollector.go:616] \"Deleting object\" object=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\" objectUID=2985ff71-ea3a-4f55-bf71-a243ca3dfce9 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0623 01:26:49.767865 11 namespace_controller.go:185] Namespace has been deleted downward-api-5776\nE0623 01:26:49.874660 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nI0623 01:26:50.048423 11 namespace_controller.go:185] Namespace has been deleted volumemode-4709\nI0623 01:26:50.171551 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"csi-mock-volumes-9466/pvc-4vjm4\"\nI0623 01:26:50.185370 11 pv_controller.go:651] volume \"pvc-980d9565-a82c-42cc-a74e-757882226896\" is released and reclaim policy \"Delete\" will be executed\nI0623 01:26:50.193832 11 pv_controller.go:890] volume \"pvc-980d9565-a82c-42cc-a74e-757882226896\" entered phase \"Released\"\nI0623 01:26:50.198098 11 pv_controller.go:1353] isVolumeReleased[pvc-980d9565-a82c-42cc-a74e-757882226896]: volume is released\nI0623 01:26:50.241478 11 pv_controller_base.go:582] deletion of claim \"csi-mock-volumes-9466/pvc-4vjm4\" was already processed\nI0623 01:26:50.723893 
11 namespace_controller.go:185] Namespace has been deleted replicaset-3588\nI0623 01:26:51.001082 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"crd-webhook-8837/sample-crd-conversion-webhook-deployment\"\nE0623 01:26:51.327405 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nI0623 01:26:51.682641 11 stateful_set.go:450] StatefulSet has been deleted volume-expand-6317-942/csi-hostpathplugin\nI0623 01:26:51.683666 11 garbagecollector.go:504] \"Processing object\" object=\"volume-expand-6317-942/csi-hostpathplugin-0\" objectUID=9e72e4e5-67eb-401b-b14d-6d6c4a4d0144 kind=\"Pod\" virtual=false\nI0623 01:26:51.683666 11 garbagecollector.go:504] \"Processing object\" object=\"volume-expand-6317-942/csi-hostpathplugin-7c4d7fdd5f\" objectUID=4d9fc5a4-945e-41f2-a84b-386ab5dc51a6 kind=\"ControllerRevision\" virtual=false\nI0623 01:26:51.687568 11 garbagecollector.go:616] \"Deleting object\" object=\"volume-expand-6317-942/csi-hostpathplugin-0\" objectUID=9e72e4e5-67eb-401b-b14d-6d6c4a4d0144 kind=\"Pod\" propagationPolicy=Background\nI0623 01:26:51.687569 11 garbagecollector.go:616] \"Deleting object\" object=\"volume-expand-6317-942/csi-hostpathplugin-7c4d7fdd5f\" objectUID=4d9fc5a4-945e-41f2-a84b-386ab5dc51a6 kind=\"ControllerRevision\" propagationPolicy=Background\nW0623 01:26:51.695233 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:51.695276 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:51.705776 11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6433, name: inline-volume-tester-kcbdg, uid: 
7c2e4a5e-8b58-4c4c-bb80-861f20780e6c] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0623 01:26:51.705844 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-6433/inline-volume-tester-kcbdg\" objectUID=7c2e4a5e-8b58-4c4c-bb80-861f20780e6c kind=\"Pod\" virtual=false\nI0623 01:26:51.714598 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6433, name: inline-volume-tester-kcbdg, uid: 7c2e4a5e-8b58-4c4c-bb80-861f20780e6c]\nI0623 01:26:51.895224 11 namespace_controller.go:185] Namespace has been deleted volume-8600\nI0623 01:26:53.669113 11 namespace_controller.go:185] Namespace has been deleted provisioning-1474\nE0623 01:26:54.066905 11 namespace_controller.go:162] deletion of namespace services-9875 failed: unexpected items still remain in namespace: services-9875 for gvr: /v1, Resource=pods\nI0623 01:26:54.302137 11 namespace_controller.go:185] Namespace has been deleted resourcequota-1381\nI0623 01:26:54.973383 11 namespace_controller.go:185] Namespace has been deleted volume-expand-6317\nW0623 01:26:55.255367 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:26:55.255399 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:26:55.469812 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"kubectl-240/agnhost-primary\" need=1 creating=1\nI0623 01:26:55.503356 11 garbagecollector.go:504] \"Processing object\" object=\"kubectl-240/agnhost-primary-8phw2\" objectUID=6861c671-01b9-44b3-a774-a88e18e8c6bb kind=\"Pod\" virtual=false\nI0623 01:26:55.738595 11 pvc_protection_controller.go:281] \"Pod uses PVC\" 
pod=\"persistent-local-volumes-test-7019/pod-83a147ed-c483-4160-919f-780202232833\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:55.739478 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:56.100855 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3583/pod-553d275d-10ef-46c8-8a5e-a25a2bf12670\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:56.100882 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nE0623 01:26:56.734190 11 pv_controller.go:1501] error finding provisioning plugin for claim provisioning-6918/pvc-ttp9d: storageclass.storage.k8s.io \"provisioning-6918\" not found\nI0623 01:26:56.734652 11 event.go:294] \"Event occurred\" object=\"provisioning-6918/pvc-ttp9d\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6918\\\" not found\"\nI0623 01:26:56.779379 11 pv_controller.go:890] volume \"local-hnzjg\" entered phase \"Available\"\nI0623 01:26:57.766723 11 namespace_controller.go:185] Namespace has been deleted secrets-7728\nI0623 01:26:58.012763 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3583/pod-553d275d-10ef-46c8-8a5e-a25a2bf12670\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:58.012801 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:58.209050 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3583/pod-553d275d-10ef-46c8-8a5e-a25a2bf12670\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:58.209346 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being 
used\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:58.215538 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"persistent-local-volumes-test-3583/pvc-zwv7g\"\nI0623 01:26:58.222816 11 pv_controller.go:651] volume \"local-pv9swhn\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:26:58.235171 11 pv_controller.go:890] volume \"local-pv9swhn\" entered phase \"Released\"\nI0623 01:26:58.241640 11 pv_controller_base.go:582] deletion of claim \"persistent-local-volumes-test-3583/pvc-zwv7g\" was already processed\nI0623 01:26:58.610722 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7019/pod-83a147ed-c483-4160-919f-780202232833\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:58.610760 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:58.809887 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7019/pod-83a147ed-c483-4160-919f-780202232833\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:58.809915 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:58.821659 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"persistent-local-volumes-test-7019/pvc-t7r82\"\nI0623 01:26:58.832990 11 pv_controller.go:651] volume \"local-pvzbhd4\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:26:58.839176 11 pv_controller.go:890] volume \"local-pvzbhd4\" entered phase \"Released\"\nI0623 01:26:58.847426 11 pv_controller_base.go:582] deletion of claim \"persistent-local-volumes-test-7019/pvc-t7r82\" was already processed\nI0623 01:26:59.233589 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5383/pvc-kzjjz\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0623 01:26:59.269388 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5383/pvc-kzjjz\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5383\\\" or manually created by system administrator\"\nI0623 01:26:59.310416 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5383/pvc-kzjjz\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod pvc-volume-tester-pjstp to be scheduled\"\nI0623 01:26:59.529591 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e VolumeSpec:0xc000cd9770 NodeName:nodes-us-west3-a-l43j PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:26:58.3059837 +0000 UTC m=+1016.252686198}\nI0623 01:26:59.532797 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e\") on node \"nodes-us-west3-a-l43j\" \nI0623 01:26:59.669768 11 expand_controller.go:291] Ignoring the PVC \"csi-mock-volumes-8251/pvc-rd692\" (uid: \"5ed981de-b34a-4bef-9fbd-05b8a6e2c3e2\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0623 01:26:59.670045 11 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8251/pvc-rd692\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin 
capable of expanding the volume; waiting for an external controller to process this PVC."
I0623 01:26:59.820633 11 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-9554/csi-hostpathp7cj4"
I0623 01:26:59.831401 11 pv_controller.go:651] volume "pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45" is released and reclaim policy "Delete" will be executed
I0623 01:26:59.841868 11 pv_controller.go:890] volume "pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45" entered phase "Released"
I0623 01:26:59.844550 11 pv_controller.go:1353] isVolumeReleased[pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45]: volume is released
I0623 01:26:59.863081 11 pv_controller_base.go:582] deletion of claim "provisioning-9554/csi-hostpathp7cj4" was already processed
I0623 01:27:00.076500 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-8e57760f-2dfd-4050-8309-ed4207bfdc45" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9554^80316092-f293-11ec-926f-32e7d6c6e60e") on node "nodes-us-west3-a-l43j" 
I0623 01:27:00.118711 11 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-9659/pvc-7q5qr"
I0623 01:27:00.128905 11 pv_controller.go:651] volume "local-sbqkf" is released and reclaim policy "Retain" will be executed
I0623 01:27:00.133253 11 pv_controller.go:890] volume "local-sbqkf" entered phase "Released"
I0623 01:27:00.147385 11 pv_controller_base.go:582] deletion of claim "provisioning-9659/pvc-7q5qr" was already processed
E0623 01:27:00.855409 11 pv_controller.go:1501] error finding provisioning plugin for claim ephemeral-2756/inline-volume-bkkf8-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0623 01:27:00.856169 11 event.go:294] "Event occurred" object="ephemeral-2756/inline-volume-bkkf8-my-volume" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0623 01:27:00.930051 11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-2756, name: inline-volume-bkkf8, uid: a795ee90-6b4b-466c-a66f-718e5f96599a] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0623 01:27:00.931194 11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-bkkf8-my-volume" objectUID=73f7b418-a39f-44ea-b239-8050a57c074a kind="PersistentVolumeClaim" virtual=false
I0623 01:27:00.932012 11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-bkkf8" objectUID=a795ee90-6b4b-466c-a66f-718e5f96599a kind="Pod" virtual=false
I0623 01:27:00.947823 11 garbagecollector.go:631] adding [v1/PersistentVolumeClaim, namespace: ephemeral-2756, name: inline-volume-bkkf8-my-volume, uid: 73f7b418-a39f-44ea-b239-8050a57c074a] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-2756, name: inline-volume-bkkf8, uid: a795ee90-6b4b-466c-a66f-718e5f96599a] is deletingDependents
I0623 01:27:00.952110 11 garbagecollector.go:616] "Deleting object" object="ephemeral-2756/inline-volume-bkkf8-my-volume" objectUID=73f7b418-a39f-44ea-b239-8050a57c074a kind="PersistentVolumeClaim" propagationPolicy=Background
E0623 01:27:00.961795 11 pv_controller.go:1501] error finding provisioning plugin for claim ephemeral-2756/inline-volume-bkkf8-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0623 01:27:00.962786 11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-bkkf8-my-volume" objectUID=73f7b418-a39f-44ea-b239-8050a57c074a kind="PersistentVolumeClaim" virtual=false
I0623 01:27:00.963379 11 event.go:294] "Event occurred" object="ephemeral-2756/inline-volume-bkkf8-my-volume" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0623 01:27:00.971181 11 pvc_protection_controller.go:269] "PVC is unused" PVC="ephemeral-2756/inline-volume-bkkf8-my-volume"
I0623 01:27:00.981580 11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-bkkf8" objectUID=a795ee90-6b4b-466c-a66f-718e5f96599a kind="Pod" virtual=false
I0623 01:27:00.985363 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-2756, name: inline-volume-bkkf8, uid: a795ee90-6b4b-466c-a66f-718e5f96599a]
I0623 01:27:01.040219 11 pv_controller.go:890] volume "local-pv986zv" entered phase "Available"
I0623 01:27:01.058705 11 pv_controller.go:941] claim "persistent-local-volumes-test-6678/pvc-6lfld" bound to volume "local-pv986zv"
I0623 01:27:01.073158 11 pv_controller.go:890] volume "local-pv986zv" entered phase "Bound"
I0623 01:27:01.073196 11 pv_controller.go:993] volume "local-pv986zv" bound to claim "persistent-local-volumes-test-6678/pvc-6lfld"
I0623 01:27:01.086745 11 pv_controller.go:834] claim "persistent-local-volumes-test-6678/pvc-6lfld" entered phase "Bound"
I0623 01:27:01.236521 11 pvc_protection_controller.go:269] "PVC is unused" PVC="persistent-local-volumes-test-6678/pvc-6lfld"
I0623 01:27:01.244736 11 pv_controller.go:651] volume "local-pv986zv" is released and reclaim policy "Retain" will be executed
I0623 01:27:01.255571 11 pv_controller.go:890] volume "local-pv986zv" entered phase "Released"
I0623 01:27:01.276566 11 pv_controller_base.go:582] deletion of claim "persistent-local-volumes-test-6678/pvc-6lfld" was already processed
I0623 01:27:01.388673 11 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-5445
I0623 01:27:01.467321 11 event.go:294] "Event occurred" object="csi-mock-volumes-5383/pvc-kzjjz" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-5383\" or manually created by system administrator"
I0623 01:27:01.482525 11 pv_controller.go:941] claim "provisioning-6918/pvc-ttp9d" bound to volume "local-hnzjg"
I0623 01:27:01.523426 11 pv_controller.go:890] volume "local-hnzjg" entered phase "Bound"
I0623 01:27:01.523468 11 pv_controller.go:993] volume "local-hnzjg" bound to claim "provisioning-6918/pvc-ttp9d"
I0623 01:27:01.539819 11 pv_controller.go:834] claim "provisioning-6918/pvc-ttp9d" entered phase "Bound"
I0623 01:27:01.559336 11 pv_controller.go:890] volume "pvc-7d2f2978-dede-48c6-b509-f89d4c635887" entered phase "Bound"
I0623 01:27:01.561441 11 pv_controller.go:993] volume "pvc-7d2f2978-dede-48c6-b509-f89d4c635887" bound to claim "csi-mock-volumes-5383/pvc-kzjjz"
I0623 01:27:01.573972 11 pv_controller.go:834] claim "csi-mock-volumes-5383/pvc-kzjjz" entered phase "Bound"
I0623 01:27:02.111611 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:02.128050 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:02.129849 11 event.go:294] "Event occurred" object="job-9490/backofflimit" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: backofflimit-km8bn"
I0623 01:27:02.137056 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:02.137245 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:02.148144 11 namespace_controller.go:185] Namespace has been deleted downward-api-8007
I0623 01:27:02.237270 11 replica_set.go:577] "Too few replicas" replicaSet="kubectl-5321/agnhost-primary" need=1 creating=1
I0623 01:27:02.250368 11 event.go:294] "Event occurred" object="kubectl-5321/agnhost-primary" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: agnhost-primary-79gl7"
I0623 01:27:03.318693 11 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-3701/pvc-jzzt9"
I0623 01:27:03.337930 11 pv_controller.go:651] volume "local-gwj6g" is released and reclaim policy "Retain" will be executed
I0623 01:27:03.346091 11 pv_controller.go:651] volume "local-gwj6g" is released and reclaim policy "Retain" will be executed
I0623 01:27:03.354956 11 pv_controller.go:890] volume "local-gwj6g" entered phase "Released"
I0623 01:27:03.366430 11 pv_controller_base.go:582] deletion of claim "provisioning-3701/pvc-jzzt9" was already processed
I0623 01:27:03.869308 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:03.879183 11 event.go:294] "Event occurred" object="ttlafterfinished-4583/rand-non-local" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rand-non-local-w9t65"
I0623 01:27:03.882341 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:03.888457 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:03.891765 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:04.216569 11 namespace_controller.go:185] Namespace has been deleted services-9875
I0623 01:27:04.243072 11 event.go:294] "Event occurred" object="ephemeral-2756-6936/csi-hostpathplugin" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0623 01:27:04.338971 11 event.go:294] "Event occurred" object="ephemeral-2756/inline-volume-tester-wz44h-my-volume-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-2756\" or manually created by system administrator"
W0623 01:27:04.404691 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:04.404725 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:05.756025 11 namespace_controller.go:185] Namespace has been deleted kubectl-240
I0623 01:27:06.186081 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:06.503899 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-9466-9954/csi-mockplugin-78966945bb" objectUID=67c8a3ee-d5fe-4525-821d-9f1c2523286b kind="ControllerRevision" virtual=false
I0623 01:27:06.504150 11 stateful_set.go:450] StatefulSet has been deleted csi-mock-volumes-9466-9954/csi-mockplugin
I0623 01:27:06.504198 11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-9466-9954/csi-mockplugin-0" objectUID=c8e9523f-1241-403c-9b84-28420a2383ae kind="Pod" virtual=false
I0623 01:27:06.578859 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-9466-9954/csi-mockplugin-0" objectUID=c8e9523f-1241-403c-9b84-28420a2383ae kind="Pod" propagationPolicy=Background
I0623 01:27:06.579297 11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-9466-9954/csi-mockplugin-78966945bb" objectUID=67c8a3ee-d5fe-4525-821d-9f1c2523286b kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:06.873118 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3583
I0623 01:27:06.873808 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7019
I0623 01:27:07.305452 11 namespace_controller.go:185] Namespace has been deleted volume-expand-6317-942
I0623 01:27:07.552540 11 pv_controller.go:890] volume "pvc-983e5913-38a5-4529-90bf-4914fcbdf86e" entered phase "Bound"
I0623 01:27:07.552577 11 pv_controller.go:993] volume "pvc-983e5913-38a5-4529-90bf-4914fcbdf86e" bound to claim "ephemeral-2756/inline-volume-tester-wz44h-my-volume-0"
I0623 01:27:07.567457 11 pv_controller.go:834] claim "ephemeral-2756/inline-volume-tester-wz44h-my-volume-0" entered phase "Bound"
I0623 01:27:07.960609 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:07.967850 11 event.go:294] "Event occurred" object="job-9490/backofflimit" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: backofflimit-msn24"
I0623 01:27:07.968244 11 job_controller.go:504] enqueueing job job-9490/backofflimit
E0623 01:27:07.975356 11 job_controller.go:539] syncing job: failed pod(s) detected for job key "job-9490/backofflimit"
I0623 01:27:07.975718 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:07.978853 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:08.530051 11 namespace_controller.go:185] Namespace has been deleted webhook-6842
I0623 01:27:08.607885 11 namespace_controller.go:185] Namespace has been deleted kubelet-test-280
I0623 01:27:08.756242 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:08.761818 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:08.764482 11 event.go:294] "Event occurred" object="ttlafterfinished-4583/rand-non-local" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rand-non-local-vqgt7"
E0623 01:27:08.776075 11 job_controller.go:539] syncing job: failed pod(s) detected for job key "ttlafterfinished-4583/rand-non-local"
I0623 01:27:08.776264 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:08.779013 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:09.412121 11 pvc_protection_controller.go:269] "PVC is unused" PVC="csi-mock-volumes-5383/pvc-kzjjz"
I0623 01:27:09.421348 11 pv_controller.go:651] volume "pvc-7d2f2978-dede-48c6-b509-f89d4c635887" is released and reclaim policy "Delete" will be executed
I0623 01:27:09.427700 11 pv_controller.go:890] volume "pvc-7d2f2978-dede-48c6-b509-f89d4c635887" entered phase "Released"
I0623 01:27:09.431147 11 pv_controller.go:1353] isVolumeReleased[pvc-7d2f2978-dede-48c6-b509-f89d4c635887]: volume is released
I0623 01:27:09.472244 11 pv_controller_base.go:582] deletion of claim "csi-mock-volumes-5383/pvc-kzjjz" was already processed
I0623 01:27:09.667383 11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9466
W0623 01:27:09.962463 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:09.962899 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:09.981535 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0623 01:27:09.987766 11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success"
I0623 01:27:09.997415 11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0623 01:27:10.024371 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"pd.csi.storage.gke.io\" or manually created by system administrator"
I0623 01:27:10.024844 11 event.go:294] "Event occurred" object="statefulset-9707/datadir-ss-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"pd.csi.storage.gke.io\" or manually created by system administrator"
W0623 01:27:10.620217 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:10.621789 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0623 01:27:10.766326 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:10.768339 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:10.804705 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:11.263545 11 namespace_controller.go:185] Namespace has been deleted kubectl-4670
I0623 01:27:11.307348 11 namespace_controller.go:185] Namespace has been deleted provisioning-9659
I0623 01:27:11.343910 11 namespace_controller.go:185] Namespace has been deleted provisioning-2222
E0623 01:27:11.369788 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:11.529232 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-2756^9890637c-f293-11ec-820a-ee75b91ad526 VolumeSpec:0xc000e40690 NodeName:nodes-us-west3-a-j1m9 ScheduledPods:[&Pod{ObjectMeta:{inline-volume-tester-wz44h inline-volume-tester- ephemeral-2756 20915478-a56f-4b77-81d0-27d22bc2ef7b 33580 0 2022-06-23 01:27:04 +0000 UTC <nil> <nil> map[app:inline-volume-tester] map[] [] [] [{e2e.test Update v1 2022-06-23 01:27:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"csi-volume-tester\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-0\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"my-volume-0\"}":{".":{},"f:ephemeral":{".":{},"f:volumeClaimTemplate":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{}},"f:spec":{".":{},"f:accessModes":{},"f:resources":{".":{},"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}},"f:name":{}}}}} } {kube-scheduler Update v1 2022-06-23 01:27:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:my-volume-0,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:&EphemeralVolumeSource{VolumeClaimTemplate:&PersistentVolumeClaimTemplate{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1048576 0} {<nil>} 1Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*ephemeral-2756gg5d6,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},},},},},Volume{Name:kube-api-access-dzwdf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:csi-volume-tester,Image:registry.k8s.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh -c sleep 10000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:my-volume-0,ReadOnly:false,MountPath:/mnt/test-0,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dzwdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j1m9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-j1m9],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:27:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0623 01:27:11.974451 11 garbagecollector.go:504] "Processing object" object="provisioning-9554-4762/csi-hostpathplugin-7d9dcd5b99" objectUID=946e1022-5579-4dc9-b074-91730b0134c5 kind="ControllerRevision" virtual=false
I0623 01:27:11.975107 11 stateful_set.go:450] StatefulSet has been deleted provisioning-9554-4762/csi-hostpathplugin
I0623 01:27:11.975206 11 garbagecollector.go:504] "Processing object" object="provisioning-9554-4762/csi-hostpathplugin-0" objectUID=3794c8e3-cfb0-4b53-9b0d-a9f124e21dfb kind="Pod" virtual=false
I0623 01:27:11.976773 11 garbagecollector.go:616] "Deleting object" object="provisioning-9554-4762/csi-hostpathplugin-7d9dcd5b99" objectUID=946e1022-5579-4dc9-b074-91730b0134c5 kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:11.977081 11 garbagecollector.go:616] "Deleting object" object="provisioning-9554-4762/csi-hostpathplugin-0" objectUID=3794c8e3-cfb0-4b53-9b0d-a9f124e21dfb kind="Pod" propagationPolicy=Background
I0623 01:27:12.010789 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:12.042647 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-983e5913-38a5-4529-90bf-4914fcbdf86e" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-2756^9890637c-f293-11ec-820a-ee75b91ad526") from node "nodes-us-west3-a-j1m9" 
I0623 01:27:12.042713 11 event.go:294] "Event occurred" object="ephemeral-2756/inline-volume-tester-wz44h" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-983e5913-38a5-4529-90bf-4914fcbdf86e\" "
W0623 01:27:12.107569 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:12.107604 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:12.111903 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-61/inline-volume-tester2-7bzmh" PVC="ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0"
I0623 01:27:12.112339 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0"
I0623 01:27:12.118839 11 pvc_protection_controller.go:269] "PVC is unused" PVC="ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0"
I0623 01:27:12.122106 11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-6678
I0623 01:27:12.127237 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-tester2-7bzmh" objectUID=2f895efb-c5ad-4539-9c30-5f8f8f4b5741 kind="Pod" virtual=false
I0623 01:27:12.132944 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-61, name: inline-volume-tester2-7bzmh, uid: 2f895efb-c5ad-4539-9c30-5f8f8f4b5741]
I0623 01:27:12.133859 11 pv_controller.go:651] volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" is released and reclaim policy "Delete" will be executed
I0623 01:27:12.134340 11 namespace_controller.go:185] Namespace has been deleted configmap-3603
I0623 01:27:12.143577 11 pv_controller.go:890] volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" entered phase "Released"
I0623 01:27:12.157817 11 pv_controller.go:1353] isVolumeReleased[pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409]: volume is released
W0623 01:27:12.160165 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:12.160197 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:12.186245 11 pv_controller_base.go:582] deletion of claim "ephemeral-61/inline-volume-tester2-7bzmh-my-volume-0" was already processed
I0623 01:27:12.345521 11 namespace_controller.go:185] Namespace has been deleted provisioning-6910
I0623 01:27:12.695609 11 namespace_controller.go:185] Namespace has been deleted pods-8530
W0623 01:27:12.748943 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:12.748985 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0623 01:27:12.771480 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:12.771516 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:13.206617 11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:13.383487 11 reconciler.go:250] "attacherDetacher.DetachVolume started" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-61^827561f9-f293-11ec-bbcb-a2260c93e9d3 VolumeSpec:0xc000cefe48 NodeName:nodes-us-west3-a-s284 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:27:12.135106874 +0000 UTC m=+1030.081809394}
I0623 01:27:13.386452 11 operation_generator.go:1603] Verified volume is safe to detach for volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-61^827561f9-f293-11ec-bbcb-a2260c93e9d3") on node "nodes-us-west3-a-s284" 
W0623 01:27:13.581957 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:13.582158 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:13.632803 11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-61, name: inline-volume-tester-vvd5r, uid: f31b1a60-bbca-4dd6-bf53-700f10b13135] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0623 01:27:13.633751 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" objectUID=b40350ad-93e5-4fb3-8487-7345ccdd7c1c kind="PersistentVolumeClaim" virtual=false
I0623 01:27:13.633991 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-tester-vvd5r" objectUID=f31b1a60-bbca-4dd6-bf53-700f10b13135 kind="Pod" virtual=false
I0623 01:27:13.643454 11 garbagecollector.go:631] adding [v1/PersistentVolumeClaim, namespace: ephemeral-61, name: inline-volume-tester-vvd5r-my-volume-0, uid: b40350ad-93e5-4fb3-8487-7345ccdd7c1c] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-61, name: inline-volume-tester-vvd5r, uid: f31b1a60-bbca-4dd6-bf53-700f10b13135] is deletingDependents
I0623 01:27:13.646524 11 garbagecollector.go:616] "Deleting object" object="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" objectUID=b40350ad-93e5-4fb3-8487-7345ccdd7c1c kind="PersistentVolumeClaim" propagationPolicy=Background
I0623 01:27:13.655926 11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" objectUID=b40350ad-93e5-4fb3-8487-7345ccdd7c1c kind="PersistentVolumeClaim" virtual=false
I0623 01:27:13.656838 11 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-61/inline-volume-tester-vvd5r" PVC="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0"
I0623 01:27:13.656861 11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0"
I0623 01:27:13.660732 11 garbagecollector.go:616] "Deleting object" object="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" objectUID=b40350ad-93e5-4fb3-8487-7345ccdd7c1c kind="PersistentVolumeClaim" propagationPolicy=Background
I0623 01:27:13.898869 11 namespace_controller.go:185] Namespace has been deleted provisioning-3701
I0623 01:27:13.929484 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-d1f9410d-eeef-48c2-b9cd-ceb7daf1f409" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-61^827561f9-f293-11ec-bbcb-a2260c93e9d3") on node "nodes-us-west3-a-s284" 
I0623 01:27:14.004129 11 deployment_controller.go:590] "Deployment has been deleted" deployment="deployment-7389/test-rolling-update-with-lb"
I0623 01:27:14.388286 11 pv_controller.go:890] volume "pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d" entered phase "Bound"
I0623 01:27:14.388716 11 pv_controller.go:993] volume "pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d" bound to claim "statefulset-9707/datadir-ss-2"
I0623 01:27:14.398538 11 pv_controller.go:834] claim "statefulset-9707/datadir-ss-2" entered phase "Bound"
I0623 01:27:14.449899 11 event.go:294] "Event occurred" object="provisioning-1855-1443/csi-hostpathplugin" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0623 01:27:14.531892 11 event.go:294] "Event occurred" object="provisioning-1855/csi-hostpath2zc5t" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-1855\" or manually created by system administrator"
I0623 01:27:14.532159 11 event.go:294] "Event occurred" object="provisioning-1855/csi-hostpath2zc5t" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-1855\" or manually created by system administrator"
I0623 01:27:14.677472 11 namespace_controller.go:185] Namespace has been deleted nettest-5995
W0623 01:27:14.694219 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:14.694266 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:15.005852 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:15.097655 11 reconciler.go:325] "attacherDetacher.AttachVolume started" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d VolumeSpec:0xc003964108 NodeName:nodes-us-west3-a-s284 ScheduledPods:[&Pod{ObjectMeta:{ss-2 ss- statefulset-9707 2978544d-1a2f-48f1-bea7-8c79e8a76606 33747 0 2022-06-23 01:27:09 +0000 UTC <nil> <nil> map[baz:blah controller-revision-hash:ss-5b74f7c5d foo:bar statefulset.kubernetes.io/pod-name:ss-2] map[] [{apps/v1 StatefulSet ss 443a664f-ea73-4296-b08a-6209ac80fa30 0xc003613d87 0xc003613d88}] [] [{kube-controller-manager Update v1 2022-06-23 01:27:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:baz":{},"f:controller-revision-hash":{},"f:foo":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"443a664f-ea73-4296-b08a-6209ac80fa30\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"webserver\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/data/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/home\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:subdoma
in\":{},\"f:terminationGracePeriodSeconds\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"datadir\\\"}\":{\".\":{},\"f:name\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:claimName\":{}}},\"k:{\\\"name\\\":\\\"home\\\"}\":{\".\":{},\"f:hostPath\":{\".\":{},\"f:path\":{},\"f:type\":{}},\"f:name\":{}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:datadir,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:datadir-ss-2,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:home,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/tmp/home,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-zmft5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,
Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:webserver,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:datadir,ReadOnly:false,MountPath:/data/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:home,ReadOnly:false,MountPath:/home,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zmft5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[test -f 
/data/statefulset-continue],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s284,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:ss-2,Subdomain:test,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:27:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nW0623 01:27:15.221724 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:15.222195 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:27:15.302627 11 namespace_controller.go:185] Namespace has been deleted provisioning-9554\nI0623 01:27:16.481051 11 event.go:294] \"Event occurred\" object=\"provisioning-1855/csi-hostpath2zc5t\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-1855\\\" or manually created by system administrator\"\nI0623 01:27:16.995181 11 pv_controller.go:890] volume \"pvc-21d47775-4650-4e0e-9884-d079d4a8917b\" entered phase \"Bound\"\nI0623 01:27:16.995550 11 pv_controller.go:993] volume \"pvc-21d47775-4650-4e0e-9884-d079d4a8917b\" bound to claim \"provisioning-1855/csi-hostpath2zc5t\"\nI0623 01:27:17.014614 11 pv_controller.go:834] claim \"provisioning-1855/csi-hostpath2zc5t\" entered phase \"Bound\"\nI0623 01:27:17.302906 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-2799/pvc-gknbc\"\nI0623 01:27:17.322673 11 pv_controller.go:651] volume \"local-dr65z\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:27:17.336141 11 pv_controller.go:651] volume \"local-dr65z\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:27:17.348119 11 
pv_controller.go:890] volume \"local-dr65z\" entered phase \"Released\"\nI0623 01:27:17.396692 11 pv_controller_base.go:582] deletion of claim \"provisioning-2799/pvc-gknbc\" was already processed\nE0623 01:27:18.254493 11 pv_controller.go:1501] error finding provisioning plugin for claim provisioning-772/pvc-4lztm: storageclass.storage.k8s.io \"provisioning-772\" not found\nI0623 01:27:18.255247 11 event.go:294] \"Event occurred\" object=\"provisioning-772/pvc-4lztm\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-772\\\" not found\"\nI0623 01:27:18.284154 11 pv_controller.go:890] volume \"local-pvlh7ct\" entered phase \"Available\"\nI0623 01:27:18.304109 11 pv_controller.go:890] volume \"local-p6q6p\" entered phase \"Available\"\nI0623 01:27:18.306047 11 pv_controller.go:941] claim \"persistent-local-volumes-test-3230/pvc-6v9mp\" bound to volume \"local-pvlh7ct\"\nI0623 01:27:18.330044 11 pv_controller.go:890] volume \"local-pvlh7ct\" entered phase \"Bound\"\nI0623 01:27:18.330089 11 pv_controller.go:993] volume \"local-pvlh7ct\" bound to claim \"persistent-local-volumes-test-3230/pvc-6v9mp\"\nI0623 01:27:18.348424 11 pv_controller.go:834] claim \"persistent-local-volumes-test-3230/pvc-6v9mp\" entered phase \"Bound\"\nI0623 01:27:18.672335 11 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-8476\nI0623 01:27:18.733952 11 reconciler.go:325] \"attacherDetacher.AttachVolume started\" volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/csi-hostpath-provisioning-1855^9e2fcedb-f293-11ec-a830-a260e94f0a3c VolumeSpec:0xc001dfa360 NodeName:nodes-us-west3-a-s284 ScheduledPods:[&Pod{ObjectMeta:{pod-subpath-test-dynamicpv-5s92 provisioning-1855 ed710783-373b-4df1-966e-a962cd9728cc 33878 0 2022-06-23 01:27:18 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2022-06-23 01:27:18 +0000 
UTC FieldsV1 {\"f:spec\":{\"f:affinity\":{\".\":{},\"f:nodeAffinity\":{\".\":{},\"f:requiredDuringSchedulingIgnoredDuringExecution\":{}}},\"f:containers\":{\"k:{\\\"name\\\":\\\"test-container-subpath-dynamicpv-5s92\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:securityContext\":{\".\":{},\"f:privileged\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/probe-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}},\"k:{\\\"mountPath\\\":\\\"/test-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{},\"f:readOnly\":{},\"f:subPath\":{}}}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:initContainers\":{\".\":{},\"k:{\\\"name\\\":\\\"init-volume-dynamicpv-5s92\\\"}\":{\".\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:securityContext\":{\".\":{},\"f:privileged\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/probe-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}},\"k:{\\\"mountPath\\\":\\\"/test-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}}}},\"k:{\\\"name\\\":\\\"test-init-volume-dynamicpv-5s92\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:securityContext\":{\".\":{},\"f:privileged\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/probe-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}},\"k:{\\\"mountPath\\\":\\\"/test-volume\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}}}}},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{\".\":{},\"f:seLinuxOptions\":{\".\":{},\"f:level\":{}}},\"f:terminationGracePeriodSeconds\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"liveness-probe-volume\\\"}\":{\".\":{},\"
f:emptyDir\":{},\"f:name\":{}},\"k:{\\\"name\\\":\\\"test-volume\\\"}\":{\".\":{},\"f:name\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:claimName\":{}}}}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:test-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:csi-hostpath2zc5t,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:liveness-probe-volume,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,},GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-zsp5r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjec
tReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-container-subpath-dynamicpv-5s92,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,Command:[],Args:[mounttest --file_content_in_loop=/test-volume/test-file --retry_time=20],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:true,MountPath:/test-volume,SubPath:provisioning-1855,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zsp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-s2
84,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[nodes-us-west3-a-s284],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[]Container{Container{Name:init-volume-dynamicpv-5s92,Image:registry.k8s.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh -c mkdir -p 
/test-volume/provisioning-1855],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/test-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zsp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},Container{Name:test-init-volume-dynamicpv-5s92,Image:registry.k8s.io/e2e-test-images/agnhost:2.39,Command:[],Args:[mounttest --new_file_0644=/test-volume/provisioning-1855/test-file 
--file_mode=/test-volume/provisioning-1855/test-file],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/test-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:liveness-probe-volume,ReadOnly:false,MountPath:/probe-volume,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zsp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 01:27:18 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}\nI0623 01:27:18.996658 11 garbagecollector.go:223] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [mygroup.example.com/v1beta1, Resource=noxus]\nI0623 01:27:18.996792 11 shared_informer.go:255] Waiting for caches to sync for garbage collector\nI0623 01:27:18.997058 11 shared_informer.go:262] Caches are synced for garbage collector\nI0623 01:27:18.997078 11 garbagecollector.go:266] synced garbage collector\nI0623 01:27:19.268055 11 operation_generator.go:398] AttachVolume.Attach succeeded for volume \"pvc-21d47775-4650-4e0e-9884-d079d4a8917b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-1855^9e2fcedb-f293-11ec-a830-a260e94f0a3c\") from node \"nodes-us-west3-a-s284\" \nI0623 01:27:19.268726 11 event.go:294] \"Event occurred\" object=\"provisioning-1855/pod-subpath-test-dynamicpv-5s92\" fieldPath=\"\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-21d47775-4650-4e0e-9884-d079d4a8917b\\\" \"\nI0623 01:27:19.402548 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"volume-3387/csi-hostpathjrldc\"\nI0623 01:27:19.411509 11 pv_controller.go:651] volume \"pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5\" is released and reclaim policy \"Delete\" will be executed\nI0623 01:27:19.416668 11 pv_controller.go:890] volume \"pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5\" entered phase \"Released\"\nI0623 01:27:19.423673 11 pv_controller.go:1353] isVolumeReleased[pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5]: volume is released\nI0623 01:27:19.442440 11 pv_controller_base.go:582] deletion of claim \"volume-3387/csi-hostpathjrldc\" was already processed\nI0623 
01:27:20.001013 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"deployment-9732/test-new-deployment\"\nI0623 01:27:20.079905 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-volume-3387^80b69f21-f293-11ec-9ddc-169b2121904c VolumeSpec:0xc003965470 NodeName:nodes-us-west3-a-l43j PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:27:17.618352055 +0000 UTC m=+1035.565054577}\nI0623 01:27:20.083276 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-3387^80b69f21-f293-11ec-9ddc-169b2121904c\") on node \"nodes-us-west3-a-l43j\" \nI0623 01:27:20.426395 11 namespace_controller.go:185] Namespace has been deleted configmap-978\nI0623 01:27:20.627186 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-9fd72ccb-83fd-415a-8e0c-773db72e59d5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-3387^80b69f21-f293-11ec-9ddc-169b2121904c\") on node \"nodes-us-west3-a-l43j\" \nE0623 01:27:20.776910 11 pv_controller.go:1501] error finding provisioning plugin for claim volume-6359/pvc-452zs: storageclass.storage.k8s.io \"volume-6359\" not found\nI0623 01:27:20.777556 11 event.go:294] \"Event occurred\" object=\"volume-6359/pvc-452zs\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6359\\\" not found\"\nI0623 01:27:20.810914 11 pv_controller.go:890] volume \"local-mmz7m\" entered phase \"Available\"\nW0623 01:27:20.826186 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:20.826323 11 reflector.go:138] 
vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:27:20.937317 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-3066/inline-volume-tester-xlvr7\" PVC=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\"\nI0623 01:27:20.937348 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\"\nI0623 01:27:20.997656 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-3066^5594db52-f293-11ec-9685-924cf2ce080f VolumeSpec:0xc00283e078 NodeName:nodes-us-west3-a-l43j PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:0001-01-01 00:00:00 +0000 UTC}\nI0623 01:27:21.007031 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-2985ff71-ea3a-4f55-bf71-a243ca3dfce9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3066^5594db52-f293-11ec-9685-924cf2ce080f\") on node \"nodes-us-west3-a-l43j\" \nI0623 01:27:21.071843 11 garbagecollector.go:504] \"Processing object\" object=\"kubectl-5321/agnhost-primary-79gl7\" objectUID=6bcf82b9-c3a9-4004-bb0e-ee2cb859bce7 kind=\"Pod\" virtual=false\nI0623 01:27:21.091757 11 garbagecollector.go:616] \"Deleting object\" object=\"kubectl-5321/agnhost-primary-79gl7\" objectUID=6bcf82b9-c3a9-4004-bb0e-ee2cb859bce7 kind=\"Pod\" propagationPolicy=Background\nI0623 01:27:21.138154 11 namespace_controller.go:185] Namespace has been deleted emptydir-8553\nI0623 01:27:21.141566 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\"\nI0623 01:27:21.159369 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-3066/inline-volume-tester-xlvr7\" 
objectUID=cfa40f0a-ad0c-4025-b56f-5a899193b6cf kind=\"Pod\" virtual=false\nI0623 01:27:21.166427 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-3066, name: inline-volume-tester-xlvr7, uid: cfa40f0a-ad0c-4025-b56f-5a899193b6cf]\nI0623 01:27:21.167120 11 pv_controller.go:651] volume \"pvc-2985ff71-ea3a-4f55-bf71-a243ca3dfce9\" is released and reclaim policy \"Delete\" will be executed\nI0623 01:27:21.172220 11 namespace_controller.go:185] Namespace has been deleted crd-watch-8350\nI0623 01:27:21.185411 11 pv_controller.go:890] volume \"pvc-2985ff71-ea3a-4f55-bf71-a243ca3dfce9\" entered phase \"Released\"\nI0623 01:27:21.192352 11 pv_controller.go:1353] isVolumeReleased[pvc-2985ff71-ea3a-4f55-bf71-a243ca3dfce9]: volume is released\nI0623 01:27:21.264312 11 pv_controller_base.go:582] deletion of claim \"ephemeral-3066/inline-volume-tester-xlvr7-my-volume-0\" was already processed\nI0623 01:27:21.280479 11 garbagecollector.go:504] \"Processing object\" object=\"kubectl-5321/rm2-52f2t\" objectUID=238b763d-256a-4f5f-881e-297288623d03 kind=\"EndpointSlice\" virtual=false\nI0623 01:27:21.285032 11 garbagecollector.go:616] \"Deleting object\" object=\"kubectl-5321/rm2-52f2t\" objectUID=238b763d-256a-4f5f-881e-297288623d03 kind=\"EndpointSlice\" propagationPolicy=Background\nI0623 01:27:21.292208 11 garbagecollector.go:504] \"Processing object\" object=\"kubectl-5321/rm3-4mkmx\" objectUID=8768cd66-9c5c-4c41-9807-b947bbbdcbdc kind=\"EndpointSlice\" virtual=false\nI0623 01:27:21.297475 11 garbagecollector.go:616] \"Deleting object\" object=\"kubectl-5321/rm3-4mkmx\" objectUID=8768cd66-9c5c-4c41-9807-b947bbbdcbdc kind=\"EndpointSlice\" propagationPolicy=Background\nI0623 01:27:21.554879 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-2985ff71-ea3a-4f55-bf71-a243ca3dfce9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3066^5594db52-f293-11ec-9685-924cf2ce080f\") on node 
"nodes-us-west3-a-l43j" 
I0623 01:27:22.429368      11 namespace_controller.go:185] Namespace has been deleted provisioning-9554-4762
I0623 01:27:23.208165      11 event.go:294] "Event occurred" object="job-9490/backofflimit" fieldPath="" kind="Job" apiVersion="batch/v1" type="Warning" reason="BackoffLimitExceeded" message="Job has reached the specified backoff limit"
I0623 01:27:23.214433      11 job_controller.go:504] enqueueing job job-9490/backofflimit
I0623 01:27:23.414529      11 namespace_controller.go:185] Namespace has been deleted emptydir-4482
I0623 01:27:24.493130      11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-5383-955/csi-mockplugin-6495f8b896" objectUID=14f155bd-41f0-493b-82f2-3feb4ba0b789 kind="ControllerRevision" virtual=false
I0623 01:27:24.493357      11 stateful_set.go:450] StatefulSet has been deleted csi-mock-volumes-5383-955/csi-mockplugin
I0623 01:27:24.493408      11 garbagecollector.go:504] "Processing object" object="csi-mock-volumes-5383-955/csi-mockplugin-0" objectUID=4be56bee-6d45-4a54-8f92-99877428ee42 kind="Pod" virtual=false
I0623 01:27:24.508234      11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-5383-955/csi-mockplugin-6495f8b896" objectUID=14f155bd-41f0-493b-82f2-3feb4ba0b789 kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:24.508600      11 garbagecollector.go:616] "Deleting object" object="csi-mock-volumes-5383-955/csi-mockplugin-0" objectUID=4be56bee-6d45-4a54-8f92-99877428ee42 kind="Pod" propagationPolicy=Background
E0623 01:27:24.623082      11 namespace_controller.go:162] deletion of namespace svcaccounts-4323 failed: unexpected items still remain in namespace: svcaccounts-4323 for gvr: /v1, Resource=pods
I0623 01:27:25.015987      11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:25.017756      11 event.go:294] "Event occurred" object="ttlafterfinished-4583/rand-non-local" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rand-non-local-tp6tv"
I0623 01:27:25.025464      11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:25.032918      11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
E0623 01:27:25.036245      11 job_controller.go:539] syncing job: failed pod(s) detected for job key "ttlafterfinished-4583/rand-non-local"
I0623 01:27:27.204350      11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
W0623 01:27:27.414485      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:27.414868      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:27.922872      11 namespace_controller.go:185] Namespace has been deleted pods-1222
I0623 01:27:28.027462      11 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5383
I0623 01:27:28.036136      11 namespace_controller.go:185] Namespace has been deleted provisioning-2799
I0623 01:27:28.618076      11 garbagecollector.go:504] "Processing object" object="ephemeral-3066-8143/csi-hostpathplugin-0" objectUID=8858d013-1a70-4d64-8b54-511442cadc6d kind="Pod" virtual=false
I0623 01:27:28.618660      11 stateful_set.go:450] StatefulSet has been deleted ephemeral-3066-8143/csi-hostpathplugin
I0623 01:27:28.618713      11 garbagecollector.go:504] "Processing object" object="ephemeral-3066-8143/csi-hostpathplugin-77c46f6d79" objectUID=9a82e23a-ebe4-4a35-9ff8-118efc04c1d5 kind="ControllerRevision" virtual=false
I0623 01:27:28.620295      11 operation_generator.go:398] AttachVolume.Attach succeeded for volume "pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d" (UniqueName: "kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d") from node "nodes-us-west3-a-s284" 
I0623 01:27:28.620671      11 event.go:294] "Event occurred" object="statefulset-9707/ss-2" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d\" "
I0623 01:27:28.624690      11 garbagecollector.go:616] "Deleting object" object="ephemeral-3066-8143/csi-hostpathplugin-77c46f6d79" objectUID=9a82e23a-ebe4-4a35-9ff8-118efc04c1d5 kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:28.625155      11 garbagecollector.go:616] "Deleting object" object="ephemeral-3066-8143/csi-hostpathplugin-0" objectUID=8858d013-1a70-4d64-8b54-511442cadc6d kind="Pod" propagationPolicy=Background
I0623 01:27:28.719494      11 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-2756, name: inline-volume-tester-wz44h, uid: 20915478-a56f-4b77-81d0-27d22bc2ef7b] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0623 01:27:28.719805      11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-tester-wz44h-my-volume-0" objectUID=983e5913-38a5-4529-90bf-4914fcbdf86e kind="PersistentVolumeClaim" virtual=false
I0623 01:27:28.726763      11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-tester-wz44h" objectUID=20915478-a56f-4b77-81d0-27d22bc2ef7b kind="Pod" virtual=false
I0623 01:27:28.748371      11 garbagecollector.go:616] "Deleting object" object="ephemeral-2756/inline-volume-tester-wz44h-my-volume-0" objectUID=983e5913-38a5-4529-90bf-4914fcbdf86e kind="PersistentVolumeClaim" propagationPolicy=Background
I0623 01:27:28.749005      11 garbagecollector.go:631] adding [v1/PersistentVolumeClaim, namespace: ephemeral-2756, name: inline-volume-tester-wz44h-my-volume-0, uid: 983e5913-38a5-4529-90bf-4914fcbdf86e] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-2756, name: inline-volume-tester-wz44h, uid: 20915478-a56f-4b77-81d0-27d22bc2ef7b] is deletingDependents
I0623 01:27:28.767158      11 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-2756/inline-volume-tester-wz44h" PVC="ephemeral-2756/inline-volume-tester-wz44h-my-volume-0"
I0623 01:27:28.767199      11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-2756/inline-volume-tester-wz44h-my-volume-0"
I0623 01:27:28.768166      11 garbagecollector.go:504] "Processing object" object="ephemeral-2756/inline-volume-tester-wz44h-my-volume-0" objectUID=983e5913-38a5-4529-90bf-4914fcbdf86e kind="PersistentVolumeClaim" virtual=false
I0623 01:27:29.253282      11 garbagecollector.go:504] "Processing object" object="job-9490/backofflimit-km8bn" objectUID=dfd5e68d-b596-4e23-96cc-99cad49a4547 kind="Pod" virtual=false
I0623 01:27:29.253605      11 job_controller.go:504] enqueueing job job-9490/backofflimit
E0623 01:27:29.253827      11 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object job-9490/backofflimit: could not find key for obj \"job-9490/backofflimit\"" job="job-9490/backofflimit"
I0623 01:27:29.253912      11 garbagecollector.go:504] "Processing object" object="job-9490/backofflimit-msn24" objectUID=ac0c082e-b559-4cbc-a8ea-59a8f98774e9 kind="Pod" virtual=false
I0623 01:27:29.257248      11 garbagecollector.go:616] "Deleting object" object="job-9490/backofflimit-km8bn" objectUID=dfd5e68d-b596-4e23-96cc-99cad49a4547 kind="Pod" propagationPolicy=Background
I0623 01:27:29.257454      11 garbagecollector.go:616] "Deleting object" object="job-9490/backofflimit-msn24" objectUID=ac0c082e-b559-4cbc-a8ea-59a8f98774e9 kind="Pod" propagationPolicy=Background
I0623 01:27:30.000363      11 deployment_controller.go:590] "Deployment has been deleted" deployment="gc-8990/simpletest.deployment"
I0623 01:27:30.205693      11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
W0623 01:27:30.387078      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:30.387112      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0623 01:27:30.859912      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:30.861502      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:31.223045      11 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-6918/pvc-ttp9d"
I0623 01:27:31.233799      11 pv_controller.go:651] volume "local-hnzjg" is released and reclaim policy "Retain" will be executed
I0623 01:27:31.241068      11 pv_controller.go:890] volume "local-hnzjg" entered phase "Released"
I0623 01:27:31.253494      11 pv_controller_base.go:582] deletion of claim "provisioning-6918/pvc-ttp9d" was already processed
I0623 01:27:31.481509      11 pv_controller.go:941] claim "provisioning-772/pvc-4lztm" bound to volume "local-p6q6p"
I0623 01:27:31.494899      11 pv_controller.go:890] volume "local-p6q6p" entered phase "Bound"
I0623 01:27:31.494947      11 pv_controller.go:993] volume "local-p6q6p" bound to claim "provisioning-772/pvc-4lztm"
I0623 01:27:31.509356      11 pv_controller.go:834] claim "provisioning-772/pvc-4lztm" entered phase "Bound"
I0623 01:27:31.509861      11 pv_controller.go:941] claim "volume-6359/pvc-452zs" bound to volume "local-mmz7m"
I0623 01:27:31.519575      11 pv_controller.go:890] volume "local-mmz7m" entered phase "Bound"
I0623 01:27:31.519607      11 pv_controller.go:993] volume "local-mmz7m" bound to claim "volume-6359/pvc-452zs"
I0623 01:27:31.528314      11 garbagecollector.go:504] "Processing object" object="volume-3387-7352/csi-hostpathplugin-6cbb79699d" objectUID=8fda99bd-86b3-4512-91a1-bc7f9d1ede42 kind="ControllerRevision" virtual=false
I0623 01:27:31.528556      11 stateful_set.go:450] StatefulSet has been deleted volume-3387-7352/csi-hostpathplugin
I0623 01:27:31.528585      11 garbagecollector.go:504] "Processing object" object="volume-3387-7352/csi-hostpathplugin-0" objectUID=e9c4c436-a838-4824-85af-0563641e7d1a kind="Pod" virtual=false
I0623 01:27:31.534000      11 garbagecollector.go:616] "Deleting object" object="volume-3387-7352/csi-hostpathplugin-0" objectUID=e9c4c436-a838-4824-85af-0563641e7d1a kind="Pod" propagationPolicy=Background
I0623 01:27:31.535073      11 pv_controller.go:834] claim "volume-6359/pvc-452zs" entered phase "Bound"
I0623 01:27:31.535436      11 garbagecollector.go:616] "Deleting object" object="volume-3387-7352/csi-hostpathplugin-6cbb79699d" objectUID=8fda99bd-86b3-4512-91a1-bc7f9d1ede42 kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:31.818323      11 namespace_controller.go:185] Namespace has been deleted ephemeral-3066
I0623 01:27:32.847925      11 garbagecollector.go:504] "Processing object" object="ephemeral-6433-8450/csi-hostpathplugin-bf445dd54" objectUID=d3eb4e69-a637-42fc-9da4-b734b4adbc52 kind="ControllerRevision" virtual=false
I0623 01:27:32.848035      11 stateful_set.go:450] StatefulSet has been deleted ephemeral-6433-8450/csi-hostpathplugin
I0623 01:27:32.848109      11 garbagecollector.go:504] "Processing object" object="ephemeral-6433-8450/csi-hostpathplugin-0" objectUID=0acaeef3-07cc-437c-ba7a-eb8213a8bfbb kind="Pod" virtual=false
I0623 01:27:32.870439      11 garbagecollector.go:616] "Deleting object" object="ephemeral-6433-8450/csi-hostpathplugin-bf445dd54" objectUID=d3eb4e69-a637-42fc-9da4-b734b4adbc52 kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:32.870694      11 garbagecollector.go:616] "Deleting object" object="ephemeral-6433-8450/csi-hostpathplugin-0" objectUID=0acaeef3-07cc-437c-ba7a-eb8213a8bfbb kind="Pod" propagationPolicy=Background
I0623 01:27:32.952952      11 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-1855/csi-hostpath2zc5t"
I0623 01:27:32.961990      11 pv_controller.go:651] volume "pvc-21d47775-4650-4e0e-9884-d079d4a8917b" is released and reclaim policy "Delete" will be executed
I0623 01:27:32.965755      11 pv_controller.go:890] volume "pvc-21d47775-4650-4e0e-9884-d079d4a8917b" entered phase "Released"
I0623 01:27:32.969711      11 pv_controller.go:1353] isVolumeReleased[pvc-21d47775-4650-4e0e-9884-d079d4a8917b]: volume is released
I0623 01:27:32.983589      11 pv_controller_base.go:582] deletion of claim "provisioning-1855/csi-hostpath2zc5t" was already processed
I0623 01:27:33.190433      11 namespace_controller.go:185] Namespace has been deleted proxy-418
W0623 01:27:33.218072      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:33.218108      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:33.370855      11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-3230/pod-7939dd2a-cc14-434a-9eb2-93d299f208ba" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:33.371266      11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:33.715164      11 reconciler.go:250] "attacherDetacher.DetachVolume started" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-provisioning-1855^9e2fcedb-f293-11ec-a830-a260e94f0a3c VolumeSpec:0xc001dfa360 NodeName:nodes-us-west3-a-s284 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:27:31.789141749 +0000 UTC m=+1049.735844317}
I0623 01:27:33.722359      11 operation_generator.go:1603] Verified volume is safe to detach for volume "pvc-21d47775-4650-4e0e-9884-d079d4a8917b" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1855^9e2fcedb-f293-11ec-a830-a260e94f0a3c") on node "nodes-us-west3-a-s284" 
I0623 01:27:33.918284      11 namespace_controller.go:185] Namespace has been deleted nettest-9975
I0623 01:27:34.257689      11 operation_generator.go:513] DetachVolume.Detach succeeded for volume "pvc-21d47775-4650-4e0e-9884-d079d4a8917b" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1855^9e2fcedb-f293-11ec-a830-a260e94f0a3c") on node "nodes-us-west3-a-s284" 
I0623 01:27:34.427299      11 namespace_controller.go:185] Namespace has been deleted job-9490
I0623 01:27:34.986561      11 namespace_controller.go:185] Namespace has been deleted volume-3387
I0623 01:27:35.111525      11 event.go:294] "Event occurred" object="statefulset-923/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0623 01:27:35.187679      11 replica_set.go:577] "Too few replicas" replicaSet="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" need=40 creating=40
I0623 01:27:35.200263      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-s5nvh"
I0623 01:27:35.233032      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-tvh2l"
I0623 01:27:35.233584      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-nr2m2"
I0623 01:27:35.252481      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-v64lw"
I0623 01:27:35.282238      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-sdkx5"
I0623 01:27:35.282535      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-ckvnk"
I0623 01:27:35.282832      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-qp9kk"
I0623 01:27:35.313142      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-ptnqj"
I0623 01:27:35.334973      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-4gfsd"
I0623 01:27:35.335535      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-nqk6l"
I0623 01:27:35.336062      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-284zh"
I0623 01:27:35.338364      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-cns5b"
I0623 01:27:35.353035      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-9268w"
I0623 01:27:35.353150      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-945bm"
I0623 01:27:35.353510      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-4hw5w"
I0623 01:27:35.399604      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-psmjv"
I0623 01:27:35.405012      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-qbfwr"
I0623 01:27:35.412092      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-kmqtt"
I0623 01:27:35.412603      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-66hdl"
I0623 01:27:35.418483      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-vl2sg"
I0623 01:27:35.418576      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-wph79"
I0623 01:27:35.418673      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-9ctxw"
I0623 01:27:35.428457      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-986r7"
I0623 01:27:35.428502      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-pc2ct"
I0623 01:27:35.428520      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-5hrcf"
I0623 01:27:35.428538      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-vdrfg"
I0623 01:27:35.428620      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-cdkzk"
I0623 01:27:35.428683      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-827ht"
I0623 01:27:35.460294      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-czkgg"
I0623 01:27:35.480268      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-h4ld2"
I0623 01:27:35.520669      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-kxkxt"
I0623 01:27:35.593849      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-ztgg4"
I0623 01:27:35.642958      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-b986k"
I0623 01:27:35.697084      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-fm7hx"
I0623 01:27:35.706240      11 namespace_controller.go:185] Namespace has been deleted port-forwarding-841
I0623 01:27:35.742836      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-8qlfn"
I0623 01:27:35.797590      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-pmfkq"
I0623 01:27:35.846538      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-6tfgj"
I0623 01:27:35.893465      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-bv8sx"
I0623 01:27:35.944155      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-594sq"
I0623 01:27:35.993840      11 event.go:294] "Event occurred" object="kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-cfvjn"
I0623 01:27:35.997167      11 namespace_controller.go:185] Namespace has been deleted ephemeral-6433
I0623 01:27:37.889920      11 namespace_controller.go:185] Namespace has been deleted provisioning-6631
I0623 01:27:38.991068      11 namespace_controller.go:185] Namespace has been deleted ephemeral-3066-8143
E0623 01:27:40.706411      11 pv_controller.go:1501] error finding provisioning plugin for claim provisioning-5722/pvc-c7hzl: storageclass.storage.k8s.io "provisioning-5722" not found
I0623 01:27:40.707054      11 event.go:294] "Event occurred" object="provisioning-5722/pvc-c7hzl" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-5722\" not found"
I0623 01:27:40.738444      11 pv_controller.go:890] volume "local-nq7vc" entered phase "Available"
I0623 01:27:41.000367      11 deployment_controller.go:590] "Deployment has been deleted" deployment="deployment-5096/webserver-deployment"
I0623 01:27:41.807097      11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-3230/pod-7939dd2a-cc14-434a-9eb2-93d299f208ba" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:41.807556      11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:41.824902      11 namespace_controller.go:185] Namespace has been deleted volume-3387-7352
I0623 01:27:42.004952      11 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-3230/pod-7939dd2a-cc14-434a-9eb2-93d299f208ba" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:42.005455      11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:42.019731      11 pvc_protection_controller.go:269] "PVC is unused" PVC="persistent-local-volumes-test-3230/pvc-6v9mp"
I0623 01:27:42.033931      11 pv_controller.go:651] volume "local-pvlh7ct" is released and reclaim policy "Retain" will be executed
I0623 01:27:42.038481      11 pv_controller.go:890] volume "local-pvlh7ct" entered phase "Released"
I0623 01:27:42.045497      11 pv_controller_base.go:582] deletion of claim "persistent-local-volumes-test-3230/pvc-6v9mp" was already processed
I0623 01:27:42.179678      11 namespace_controller.go:185] Namespace has been deleted provisioning-6918
I0623 01:27:42.439958      11 namespace_controller.go:185] Namespace has been deleted tables-1265
E0623 01:27:42.912106      11 pv_controller.go:1501] error finding provisioning plugin for claim provisioning-6972/pvc-p6cd8: storageclass.storage.k8s.io "provisioning-6972" not found
I0623 01:27:42.912695      11 event.go:294] "Event occurred" object="provisioning-6972/pvc-p6cd8" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-6972\" not found"
I0623 01:27:42.943426      11 pv_controller.go:890] volume "local-lhxjv" entered phase "Available"
I0623 01:27:43.304448      11 pvc_protection_controller.go:269] "PVC is unused" PVC="volume-6359/pvc-452zs"
I0623 01:27:43.328248      11 pv_controller.go:651] volume "local-mmz7m" is released and reclaim policy "Retain" will be executed
I0623 01:27:43.339607      11 pv_controller.go:651] volume "local-mmz7m" is released and reclaim policy "Retain" will be executed
I0623 01:27:43.348115      11 pv_controller.go:890] volume "local-mmz7m" entered phase "Released"
I0623 01:27:43.360554      11 pv_controller_base.go:582] deletion of claim "volume-6359/pvc-452zs" was already processed
I0623 01:27:43.724573      11 stateful_set_control.go:535] "Pod of StatefulSet is terminating for scale down" statefulSet="statefulset-9707/ss" pod="statefulset-9707/ss-2"
I0623 01:27:43.736479      11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
I0623 01:27:44.340499      11 namespace_controller.go:185] Namespace has been deleted emptydir-800
I0623 01:27:44.900669      11 namespace_controller.go:185] Namespace has been deleted services-5466
I0623 01:27:45.099497      11 garbagecollector.go:504] "Processing object" object="provisioning-1855-1443/csi-hostpathplugin-7cd45fbd6c" objectUID=e8780dfe-26e6-4eda-93f6-1de952f9dfdb kind="ControllerRevision" virtual=false
I0623 01:27:45.099819      11 stateful_set.go:450] StatefulSet has been deleted provisioning-1855-1443/csi-hostpathplugin
I0623 01:27:45.099901      11 garbagecollector.go:504] "Processing object" object="provisioning-1855-1443/csi-hostpathplugin-0" objectUID=beb2cfb7-5253-44e6-af26-e549b6f3eb95 kind="Pod" virtual=false
I0623 01:27:45.103065      11 garbagecollector.go:616] "Deleting object" object="provisioning-1855-1443/csi-hostpathplugin-0" objectUID=beb2cfb7-5253-44e6-af26-e549b6f3eb95 kind="Pod" propagationPolicy=Background
I0623 01:27:45.103464      11 garbagecollector.go:616] "Deleting object" object="provisioning-1855-1443/csi-hostpathplugin-7cd45fbd6c" objectUID=e8780dfe-26e6-4eda-93f6-1de952f9dfdb kind="ControllerRevision" propagationPolicy=Background
I0623 01:27:45.228213      11 replica_set.go:577] "Too few replicas" replicaSet="svc-latency-2830/svc-latency-rc" need=1 creating=1
I0623 01:27:45.241372      11 event.go:294] "Event occurred" object="svc-latency-2830/svc-latency-rc" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: svc-latency-rc-25hld"
W0623 01:27:45.655786      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:45.655821      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:45.956578      11 namespace_controller.go:185] Namespace has been deleted container-probe-7211
I0623 01:27:46.482261      11 pv_controller.go:941] claim "provisioning-6972/pvc-p6cd8" bound to volume "local-lhxjv"
I0623 01:27:46.492921      11 pv_controller.go:890] volume "local-lhxjv" entered phase "Bound"
I0623 01:27:46.492960      11 pv_controller.go:993] volume "local-lhxjv" bound to claim "provisioning-6972/pvc-p6cd8"
I0623 01:27:46.501853      11 pv_controller.go:834] claim "provisioning-6972/pvc-p6cd8" entered phase "Bound"
I0623 01:27:46.502646      11 pv_controller.go:941] claim "provisioning-5722/pvc-c7hzl" bound to volume "local-nq7vc"
I0623 01:27:46.513380      11 pv_controller.go:890] volume "local-nq7vc" entered phase "Bound"
I0623 01:27:46.513434      11 pv_controller.go:993] volume "local-nq7vc" bound to claim "provisioning-5722/pvc-c7hzl"
I0623 01:27:46.520157      11 pv_controller.go:834] claim "provisioning-5722/pvc-c7hzl" entered phase "Bound"
W0623 01:27:46.972162      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:46.972194      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:47.632206      11 namespace_controller.go:185] Namespace has been deleted kubectl-5321
I0623 01:27:48.315694      11 namespace_controller.go:185] Namespace has been deleted provisioning-1855
I0623 01:27:48.486237      11 namespace_controller.go:185] Namespace has been deleted ephemeral-6433-8450
W0623 01:27:49.131850      11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0623 01:27:49.131910      11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0623 01:27:49.138780      11 garbagecollector.go:504] "Processing object" object="sctp-9499/sctp-clusterip-2tk5k" objectUID=e2bf90dc-3991-40d7-8fa0-5c60026443b6 kind="EndpointSlice" virtual=false
I0623 01:27:49.146155      11 garbagecollector.go:616] "Deleting object" object="sctp-9499/sctp-clusterip-2tk5k" objectUID=e2bf90dc-3991-40d7-8fa0-5c60026443b6 kind="EndpointSlice" propagationPolicy=Background
I0623 01:27:49.180174      11 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-61/inline-volume-tester-vvd5r" PVC="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0"
I0623 01:27:49.180202      11 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0"
I0623 01:27:49.360797      11 pvc_protection_controller.go:269] "PVC is unused" PVC="ephemeral-61/inline-volume-tester-vvd5r-my-volume-0"
I0623 01:27:49.371331      11 garbagecollector.go:504] "Processing object" object="ephemeral-61/inline-volume-tester-vvd5r" objectUID=f31b1a60-bbca-4dd6-bf53-700f10b13135 kind="Pod" virtual=false
I0623 01:27:49.374571      11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-61, name: inline-volume-tester-vvd5r, uid: f31b1a60-bbca-4dd6-bf53-700f10b13135]
I0623 01:27:49.376299      11 pv_controller.go:651] volume "pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c" is released and reclaim policy "Delete" will be executed
I0623 01:27:49.384444      11 pv_controller.go:890] volume "pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c" entered phase "Released"
I0623 01:27:49.389788      11 pv_controller.go:1353] isVolumeReleased[pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c]: volume is released
I0623 01:27:49.404302      11 pv_controller_base.go:582] deletion of claim "ephemeral-61/inline-volume-tester-vvd5r-my-volume-0" was already processed
I0623 01:27:49.825822      11 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3230
I0623 01:27:50.168239      11 stateful_set_control.go:535] "Pod of StatefulSet is terminating for scale down" statefulSet="statefulset-9707/ss" pod="statefulset-9707/ss-1"
I0623 01:27:50.185453      11 event.go:294] "Event occurred" object="statefulset-9707/ss" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0623 01:27:50.210550      11 event.go:294] "Event occurred" object="ttlafterfinished-4583/rand-non-local" fieldPath="" kind="Job" apiVersion="batch/v1" type="Warning" reason="BackoffLimitExceeded" message="Job has reached the specified backoff limit"
I0623 01:27:50.224504      11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local
I0623 01:27:51.373237      11 replica_set.go:577] "Too few replicas" replicaSet="apply-6020/deployment-6c468f5898" need=3 creating=3
I0623 01:27:51.376414      11 event.go:294] "Event occurred" object="apply-6020/deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deployment-6c468f5898 to 3"
I0623 01:27:51.383096      11 
event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-6c468f5898-rnpsk\"\nI0623 01:27:51.390634 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-6c468f5898-jhj6n\"\nI0623 01:27:51.391745 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-6c468f5898-4kmzk\"\nI0623 01:27:51.410736 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"apply-6020/deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:27:51.415622 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-772/pvc-4lztm\"\nI0623 01:27:51.427433 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"apply-6020/deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:27:51.441993 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"apply-6020/deployment\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"deployment-6c468f5898\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:27:51.451077 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"apply-6020/deployment-6c468f5898\" need=5 creating=2\nI0623 01:27:51.452360 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment\" fieldPath=\"\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-6c468f5898 to 5 from 3\"\nI0623 01:27:51.464274 11 pv_controller.go:651] volume \"local-p6q6p\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:27:51.469680 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-6c468f5898-qgzp9\"\nI0623 01:27:51.488538 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-6c468f5898-kkvbf\"\nI0623 01:27:51.510776 11 pv_controller.go:890] volume \"local-p6q6p\" entered phase \"Released\"\nI0623 01:27:51.529607 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"apply-6020/deployment\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"deployment-6c468f5898\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:27:51.533584 11 pv_controller_base.go:582] deletion of claim \"provisioning-772/pvc-4lztm\" was already processed\nI0623 01:27:51.539450 11 deployment_controller.go:497] \"Error syncing deployment\" deployment=\"apply-6020/deployment\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"deployment-6c468f5898\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0623 01:27:51.548746 11 replica_set.go:613] \"Too many replicas\" replicaSet=\"apply-6020/deployment-6c468f5898\" need=3 deleting=2\nI0623 01:27:51.549167 11 replica_set.go:241] \"Found related ReplicaSets\" replicaSet=\"apply-6020/deployment-6c468f5898\" relatedReplicaSets=[apply-6020/deployment-6c468f5898]\nI0623 01:27:51.549494 11 controller_utils.go:592] 
\"Deleting pod\" controller=\"deployment-6c468f5898\" pod=\"apply-6020/deployment-6c468f5898-rnpsk\"\nI0623 01:27:51.549947 11 controller_utils.go:592] \"Deleting pod\" controller=\"deployment-6c468f5898\" pod=\"apply-6020/deployment-6c468f5898-kkvbf\"\nI0623 01:27:51.551072 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment\" fieldPath=\"\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set deployment-6c468f5898 to 3 from 5\"\nI0623 01:27:51.555333 11 event.go:294] \"Event occurred\" object=\"statefulset-8729/ss2\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0623 01:27:51.573688 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: deployment-6c468f5898-kkvbf\"\nI0623 01:27:51.575649 11 event.go:294] \"Event occurred\" object=\"apply-6020/deployment-6c468f5898\" fieldPath=\"\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: deployment-6c468f5898-rnpsk\"\nE0623 01:27:51.600196 11 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-6c468f5898.16fb1ba23abe53da\", GenerateName:\"\", Namespace:\"apply-6020\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"apply-6020\", 
Name:\"deployment-6c468f5898\", UID:\"1fb2d43f-8d33-4b3d-b0f1-f593ef8086ce\", APIVersion:\"apps/v1\", ResourceVersion:\"34998\", FieldPath:\"\"}, Reason:\"SuccessfulDelete\", Message:\"Deleted pod: deployment-6c468f5898-kkvbf\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 1, 27, 51, 572992986, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 1, 27, 51, 572992986, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-6c468f5898.16fb1ba23abe53da\" is forbidden: unable to create new content in namespace apply-6020 because it is being terminated' (will not retry!)\nE0623 01:27:51.604186 11 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"deployment-6c468f5898.16fb1ba23ae20730\", GenerateName:\"\", Namespace:\"apply-6020\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"apply-6020\", Name:\"deployment-6c468f5898\", UID:\"1fb2d43f-8d33-4b3d-b0f1-f593ef8086ce\", APIVersion:\"apps/v1\", ResourceVersion:\"34998\", FieldPath:\"\"}, Reason:\"SuccessfulDelete\", Message:\"Deleted pod: deployment-6c468f5898-rnpsk\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 23, 1, 27, 51, 575332656, time.Local), LastTimestamp:time.Date(2022, time.June, 23, 1, 27, 51, 575332656, 
time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"deployment-6c468f5898.16fb1ba23ae20730\" is forbidden: unable to create new content in namespace apply-6020 because it is being terminated' (will not retry!)\nI0623 01:27:51.615444 11 garbagecollector.go:504] \"Processing object\" object=\"apply-6020/deployment-6c468f5898\" objectUID=1fb2d43f-8d33-4b3d-b0f1-f593ef8086ce kind=\"ReplicaSet\" virtual=false\nI0623 01:27:51.616830 11 deployment_controller.go:590] \"Deployment has been deleted\" deployment=\"apply-6020/deployment\"\nI0623 01:27:51.624451 11 garbagecollector.go:616] \"Deleting object\" object=\"apply-6020/deployment-6c468f5898\" objectUID=1fb2d43f-8d33-4b3d-b0f1-f593ef8086ce kind=\"ReplicaSet\" propagationPolicy=Background\nI0623 01:27:51.630419 11 garbagecollector.go:504] \"Processing object\" object=\"apply-6020/deployment-6c468f5898-rnpsk\" objectUID=b720560d-44c9-4365-9821-34c01df32687 kind=\"Pod\" virtual=false\nI0623 01:27:51.630854 11 garbagecollector.go:504] \"Processing object\" object=\"apply-6020/deployment-6c468f5898-4kmzk\" objectUID=58c0bfe4-05f0-4b5f-8f62-e0b27245f1c1 kind=\"Pod\" virtual=false\nI0623 01:27:51.630883 11 garbagecollector.go:504] \"Processing object\" object=\"apply-6020/deployment-6c468f5898-jhj6n\" objectUID=ecc7323c-c33d-4fe7-b990-f96ce314c56e kind=\"Pod\" virtual=false\nI0623 01:27:51.630930 11 garbagecollector.go:504] \"Processing object\" object=\"apply-6020/deployment-6c468f5898-qgzp9\" objectUID=761de7f1-f8f0-45e5-af96-6ffc1b3537b6 kind=\"Pod\" virtual=false\nI0623 01:27:51.630957 11 garbagecollector.go:504] \"Processing object\" object=\"apply-6020/deployment-6c468f5898-kkvbf\" objectUID=4bfdde00-1e1c-4c5c-b18c-e6c89554cfa3 kind=\"Pod\" virtual=false\nE0623 01:27:51.637046 11 replica_set.go:550] sync 
\"apply-6020/deployment-6c468f5898\" failed with replicasets.apps \"deployment-6c468f5898\" not found\nI0623 01:27:51.646007 11 garbagecollector.go:616] \"Deleting object\" object=\"apply-6020/deployment-6c468f5898-qgzp9\" objectUID=761de7f1-f8f0-45e5-af96-6ffc1b3537b6 kind=\"Pod\" propagationPolicy=Background\nI0623 01:27:51.646711 11 garbagecollector.go:616] \"Deleting object\" object=\"apply-6020/deployment-6c468f5898-4kmzk\" objectUID=58c0bfe4-05f0-4b5f-8f62-e0b27245f1c1 kind=\"Pod\" propagationPolicy=Background\nI0623 01:27:51.647201 11 garbagecollector.go:616] \"Deleting object\" object=\"apply-6020/deployment-6c468f5898-jhj6n\" objectUID=ecc7323c-c33d-4fe7-b990-f96ce314c56e kind=\"Pod\" propagationPolicy=Background\nW0623 01:27:52.216846 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:52.217242 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:27:52.847387 11 replica_set.go:577] \"Too few replicas\" replicaSet=\"services-3413/nodeport-update-service\" need=2 creating=2\nI0623 01:27:52.861973 11 event.go:294] \"Event occurred\" object=\"services-3413/nodeport-update-service\" fieldPath=\"\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-update-service-8b7rr\"\nI0623 01:27:52.871185 11 event.go:294] \"Event occurred\" object=\"services-3413/nodeport-update-service\" fieldPath=\"\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-update-service-jkmg8\"\nI0623 01:27:53.808511 11 namespace_controller.go:185] Namespace has been deleted volume-6359\nI0623 01:27:53.966848 11 
stateful_set_control.go:535] \"Pod of StatefulSet is terminating for scale down\" statefulSet=\"statefulset-9707/ss\" pod=\"statefulset-9707/ss-0\"\nI0623 01:27:53.974385 11 event.go:294] \"Event occurred\" object=\"statefulset-9707/ss\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0623 01:27:54.428625 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d VolumeSpec:0xc003964108 NodeName:nodes-us-west3-a-s284 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:27:49.766586956 +0000 UTC m=+1067.713289472}\nI0623 01:27:54.439884 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d\") on node \"nodes-us-west3-a-s284\" \nI0623 01:27:54.449196 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-61^77714bba-f293-11ec-bbcb-a2260c93e9d3 VolumeSpec:0xc003022c90 NodeName:nodes-us-west3-a-s284 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:27:49.262279775 +0000 UTC m=+1067.208982295}\nI0623 01:27:54.454830 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-61^77714bba-f293-11ec-bbcb-a2260c93e9d3\") on node \"nodes-us-west3-a-s284\" \nI0623 01:27:54.532841 11 namespace_controller.go:185] Namespace has been deleted 
csi-mock-volumes-9466-9954\nW0623 01:27:54.705823 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:54.707725 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:27:54.713545 11 namespace_controller.go:185] Namespace has been deleted provisioning-2549\nI0623 01:27:54.989482 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-b40350ad-93e5-4fb3-8487-7345ccdd7c1c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-61^77714bba-f293-11ec-bbcb-a2260c93e9d3\") on node \"nodes-us-west3-a-s284\" \nW0623 01:27:55.169510 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:55.169567 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:27:55.308265 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-5722/pvc-c7hzl\"\nI0623 01:27:55.409735 11 pv_controller.go:651] volume \"local-nq7vc\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:27:55.432487 11 pv_controller.go:651] volume \"local-nq7vc\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:27:55.460348 11 pv_controller.go:890] volume \"local-nq7vc\" entered phase \"Released\"\nI0623 01:27:55.496178 11 pv_controller_base.go:582] deletion of claim \"provisioning-5722/pvc-c7hzl\" was already processed\nW0623 01:27:55.856216 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: 
failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:55.856388 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0623 01:27:57.648167 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:27:57.648219 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:27:57.788074 11 event.go:294] \"Event occurred\" object=\"statefulset-8729/ss2\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0623 01:27:58.761762 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-2b3f1f49-1e28-4e4c-9cb4-1bc13e5afa9d\") on node \"nodes-us-west3-a-s284\" \nI0623 01:27:59.297073 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-61-788/csi-hostpathplugin-6db6db5fc\" objectUID=9c50114d-18ca-4c22-aec4-1ca344dcce41 kind=\"ControllerRevision\" virtual=false\nI0623 01:27:59.297611 11 stateful_set.go:450] StatefulSet has been deleted ephemeral-61-788/csi-hostpathplugin\nI0623 01:27:59.297842 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-61-788/csi-hostpathplugin-0\" objectUID=f3eedd7d-70ca-4881-8556-1fba07b212b6 kind=\"Pod\" virtual=false\nI0623 01:27:59.301900 11 garbagecollector.go:616] \"Deleting object\" 
object=\"ephemeral-61-788/csi-hostpathplugin-0\" objectUID=f3eedd7d-70ca-4881-8556-1fba07b212b6 kind=\"Pod\" propagationPolicy=Background\nI0623 01:27:59.302388 11 garbagecollector.go:616] \"Deleting object\" object=\"ephemeral-61-788/csi-hostpathplugin-6db6db5fc\" objectUID=9c50114d-18ca-4c22-aec4-1ca344dcce41 kind=\"ControllerRevision\" propagationPolicy=Background\nI0623 01:27:59.765740 11 namespace_controller.go:185] Namespace has been deleted sctp-9499\nI0623 01:28:00.055542 11 graph_builder.go:587] add [batch/v1/Job, namespace: ttlafterfinished-4583, name: rand-non-local, uid: 78fe4113-586e-46b0-b7e9-6ff20585b82c] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0623 01:28:00.055859 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local\nI0623 01:28:00.056112 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local-w9t65\" objectUID=c15e6e18-f37f-4a63-99b1-7510bec68b66 kind=\"Pod\" virtual=false\nI0623 01:28:00.056344 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local-vqgt7\" objectUID=0c41d557-a666-4ded-a648-360cb8d88f71 kind=\"Pod\" virtual=false\nI0623 01:28:00.056501 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local\" objectUID=78fe4113-586e-46b0-b7e9-6ff20585b82c kind=\"Job\" virtual=false\nI0623 01:28:00.056442 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local-tp6tv\" objectUID=0c500277-39f3-4b0f-a88c-18abb80c8bdd kind=\"Pod\" virtual=false\nI0623 01:28:00.061987 11 garbagecollector.go:631] adding [v1/Pod, namespace: ttlafterfinished-4583, name: rand-non-local-w9t65, uid: c15e6e18-f37f-4a63-99b1-7510bec68b66] to attemptToDelete, because its owner [batch/v1/Job, namespace: ttlafterfinished-4583, name: rand-non-local, uid: 78fe4113-586e-46b0-b7e9-6ff20585b82c] is deletingDependents\nI0623 01:28:00.062407 11 
garbagecollector.go:631] adding [v1/Pod, namespace: ttlafterfinished-4583, name: rand-non-local-vqgt7, uid: 0c41d557-a666-4ded-a648-360cb8d88f71] to attemptToDelete, because its owner [batch/v1/Job, namespace: ttlafterfinished-4583, name: rand-non-local, uid: 78fe4113-586e-46b0-b7e9-6ff20585b82c] is deletingDependents\nI0623 01:28:00.062435 11 garbagecollector.go:631] adding [v1/Pod, namespace: ttlafterfinished-4583, name: rand-non-local-tp6tv, uid: 0c500277-39f3-4b0f-a88c-18abb80c8bdd] to attemptToDelete, because its owner [batch/v1/Job, namespace: ttlafterfinished-4583, name: rand-non-local, uid: 78fe4113-586e-46b0-b7e9-6ff20585b82c] is deletingDependents\nI0623 01:28:00.066492 11 garbagecollector.go:616] \"Deleting object\" object=\"ttlafterfinished-4583/rand-non-local-vqgt7\" objectUID=0c41d557-a666-4ded-a648-360cb8d88f71 kind=\"Pod\" propagationPolicy=Background\nI0623 01:28:00.066487 11 garbagecollector.go:616] \"Deleting object\" object=\"ttlafterfinished-4583/rand-non-local-w9t65\" objectUID=c15e6e18-f37f-4a63-99b1-7510bec68b66 kind=\"Pod\" propagationPolicy=Background\nI0623 01:28:00.084825 11 garbagecollector.go:616] \"Deleting object\" object=\"ttlafterfinished-4583/rand-non-local-tp6tv\" objectUID=0c500277-39f3-4b0f-a88c-18abb80c8bdd kind=\"Pod\" propagationPolicy=Background\nI0623 01:28:00.087661 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local-w9t65\" objectUID=c15e6e18-f37f-4a63-99b1-7510bec68b66 kind=\"Pod\" virtual=false\nI0623 01:28:00.092279 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local\" objectUID=78fe4113-586e-46b0-b7e9-6ff20585b82c kind=\"Job\" virtual=false\nI0623 01:28:00.092879 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local-vqgt7\" objectUID=0c41d557-a666-4ded-a648-360cb8d88f71 kind=\"Pod\" virtual=false\nI0623 01:28:00.103739 11 garbagecollector.go:504] \"Processing object\" 
object=\"ttlafterfinished-4583/rand-non-local-tp6tv\" objectUID=0c500277-39f3-4b0f-a88c-18abb80c8bdd kind=\"Pod\" virtual=false\nI0623 01:28:00.104091 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [batch/v1/Job, namespace: ttlafterfinished-4583, name: rand-non-local, uid: 78fe4113-586e-46b0-b7e9-6ff20585b82c]\nI0623 01:28:00.107486 11 job_controller.go:504] enqueueing job cronjob-2627/forbid-27599128\nI0623 01:28:00.111756 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local\nI0623 01:28:00.113379 11 garbagecollector.go:504] \"Processing object\" object=\"ttlafterfinished-4583/rand-non-local\" objectUID=78fe4113-586e-46b0-b7e9-6ff20585b82c kind=\"Job\" virtual=false\nI0623 01:28:00.113788 11 event.go:294] \"Event occurred\" object=\"cronjob-2627/forbid\" fieldPath=\"\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job forbid-27599128\"\nI0623 01:28:00.123759 11 job_controller.go:504] enqueueing job cronjob-2627/forbid-27599128\nI0623 01:28:00.124834 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [batch/v1/Job, namespace: ttlafterfinished-4583, name: rand-non-local, uid: 78fe4113-586e-46b0-b7e9-6ff20585b82c]\nI0623 01:28:00.125791 11 event.go:294] \"Event occurred\" object=\"cronjob-2627/forbid-27599128\" fieldPath=\"\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: forbid-27599128-mqphx\"\nI0623 01:28:00.129874 11 job_controller.go:504] enqueueing job cronjob-2627/forbid-27599128\nI0623 01:28:00.135185 11 job_controller.go:504] enqueueing job cronjob-2627/forbid-27599128\nI0623 01:28:00.157634 11 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-2756/inline-volume-tester-wz44h\" PVC=\"ephemeral-2756/inline-volume-tester-wz44h-my-volume-0\"\nI0623 01:28:00.157764 11 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" 
PVC=\"ephemeral-2756/inline-volume-tester-wz44h-my-volume-0\"\nI0623 01:28:00.363010 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-2756/inline-volume-tester-wz44h-my-volume-0\"\nI0623 01:28:00.374634 11 garbagecollector.go:504] \"Processing object\" object=\"ephemeral-2756/inline-volume-tester-wz44h\" objectUID=20915478-a56f-4b77-81d0-27d22bc2ef7b kind=\"Pod\" virtual=false\nI0623 01:28:00.377493 11 garbagecollector.go:626] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-2756, name: inline-volume-tester-wz44h, uid: 20915478-a56f-4b77-81d0-27d22bc2ef7b]\nI0623 01:28:00.378186 11 pv_controller.go:651] volume \"pvc-983e5913-38a5-4529-90bf-4914fcbdf86e\" is released and reclaim policy \"Delete\" will be executed\nI0623 01:28:00.389220 11 pv_controller.go:890] volume \"pvc-983e5913-38a5-4529-90bf-4914fcbdf86e\" entered phase \"Released\"\nI0623 01:28:00.404617 11 pv_controller.go:1353] isVolumeReleased[pvc-983e5913-38a5-4529-90bf-4914fcbdf86e]: volume is released\nI0623 01:28:00.417453 11 pv_controller_base.go:582] deletion of claim \"ephemeral-2756/inline-volume-tester-wz44h-my-volume-0\" was already processed\nI0623 01:28:00.760475 11 garbagecollector.go:504] \"Processing object\" object=\"cronjob-2627/forbid-27599128-mqphx\" objectUID=24b726a8-a6a9-440d-8748-9e94dda18d31 kind=\"Pod\" virtual=false\nI0623 01:28:00.762115 11 job_controller.go:504] enqueueing job cronjob-2627/forbid-27599128\nE0623 01:28:00.762379 11 tracking_utils.go:109] \"deleting tracking annotation UID expectations\" err=\"couldn't create key for object cronjob-2627/forbid-27599128: could not find key for obj \\\"cronjob-2627/forbid-27599128\\\"\" job=\"cronjob-2627/forbid-27599128\"\nI0623 01:28:00.768055 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e 
VolumeSpec:0xc0025cc3c0 NodeName:nodes-us-west3-a-j1m9 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:27:53.595969391 +0000 UTC m=+1071.542671925}\nI0623 01:28:00.801088 11 garbagecollector.go:616] \"Deleting object\" object=\"cronjob-2627/forbid-27599128-mqphx\" objectUID=24b726a8-a6a9-440d-8748-9e94dda18d31 kind=\"Pod\" propagationPolicy=Background\nI0623 01:28:00.801920 11 event.go:294] \"Event occurred\" object=\"cronjob-2627/forbid\" fieldPath=\"\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"MissingJob\" message=\"Active job went missing: forbid-27599128\"\nI0623 01:28:00.815470 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:28:00.837184 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/csi-hostpath-ephemeral-2756^9890637c-f293-11ec-820a-ee75b91ad526 VolumeSpec:0xc000e40690 NodeName:nodes-us-west3-a-j1m9 PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:2022-06-23 01:28:00.230320277 +0000 UTC m=+1078.177022775}\nI0623 01:28:00.887353 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-983e5913-38a5-4529-90bf-4914fcbdf86e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-2756^9890637c-f293-11ec-820a-ee75b91ad526\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:28:01.020395 11 namespace_controller.go:185] Namespace has been deleted ephemeral-61\nE0623 01:28:01.124620 11 tracking_utils.go:109] \"deleting tracking annotation UID expectations\" err=\"couldn't create key for object cronjob-2627/forbid-27599128: could not find key for obj 
\\\"cronjob-2627/forbid-27599128\\\"\" job=\"cronjob-2627/forbid-27599128\"\nI0623 01:28:01.163167 11 namespace_controller.go:185] Namespace has been deleted provisioning-1855-1443\nI0623 01:28:01.446637 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-983e5913-38a5-4529-90bf-4914fcbdf86e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-2756^9890637c-f293-11ec-820a-ee75b91ad526\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:28:01.627380 11 namespace_controller.go:185] Namespace has been deleted volumemode-7477\nI0623 01:28:02.070246 11 job_controller.go:504] enqueueing job ttlafterfinished-4583/rand-non-local\nE0623 01:28:02.071872 11 tracking_utils.go:109] \"deleting tracking annotation UID expectations\" err=\"couldn't create key for object ttlafterfinished-4583/rand-non-local: could not find key for obj \\\"ttlafterfinished-4583/rand-non-local\\\"\" job=\"ttlafterfinished-4583/rand-non-local\"\nW0623 01:28:02.129700 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:28:02.130468 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:28:02.766477 11 event.go:294] \"Event occurred\" object=\"statefulset-8729/ss2\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0623 01:28:03.026593 11 job_controller.go:504] enqueueing job job-7672/fail-once-non-local\nI0623 01:28:03.035879 11 job_controller.go:504] enqueueing job job-7672/fail-once-non-local\nI0623 01:28:03.036623 11 event.go:294] \"Event occurred\" object=\"job-7672/fail-once-non-local\" fieldPath=\"\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-9kc52\"\nI0623 01:28:03.043249 11 job_controller.go:504] enqueueing job job-7672/fail-once-non-local\nI0623 01:28:03.045331 11 event.go:294] \"Event occurred\" object=\"job-7672/fail-once-non-local\" fieldPath=\"\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-8x466\"\nI0623 01:28:03.052411 11 job_controller.go:504] enqueueing job job-7672/fail-once-non-local\nI0623 01:28:03.347137 11 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-6972/pvc-p6cd8\"\nI0623 01:28:03.363795 11 pv_controller.go:651] volume \"local-lhxjv\" is released and reclaim policy \"Retain\" will be executed\nI0623 01:28:03.372962 11 pv_controller.go:890] volume \"local-lhxjv\" entered phase \"Released\"\nI0623 01:28:03.382332 11 pv_controller_base.go:582] deletion of claim \"provisioning-6972/pvc-p6cd8\" was already processed\nI0623 01:28:03.719066 11 namespace_controller.go:185] Namespace has been deleted provisioning-772\nW0623 01:28:04.290501 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:28:04.290537 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:28:05.161464 11 event.go:294] \"Event occurred\" object=\"volumemode-2456-840/csi-hostpathplugin\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0623 01:28:05.252058 11 event.go:294] \"Event occurred\" object=\"volumemode-2456/csi-hostpathjn2km\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-2456\\\" or manually created by system administrator\"\nI0623 01:28:05.252215 11 event.go:294] \"Event occurred\" object=\"volumemode-2456/csi-hostpathjn2km\" fieldPath=\"\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-2456\\\" or manually created by system administrator\"\nI0623 01:28:05.287332 11 event.go:294] \"Event occurred\" object=\"statefulset-923/ss\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI0623 01:28:05.437851 11 stateful_set_control.go:535] \"Pod of StatefulSet is terminating for scale down\" statefulSet=\"statefulset-923/ss\" pod=\"statefulset-923/ss-1\"\nI0623 01:28:05.445465 11 event.go:294] \"Event occurred\" object=\"statefulset-923/ss\" fieldPath=\"\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nW0623 01:28:05.590343 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:28:05.590780 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:28:05.699459 11 reconciler.go:250] \"attacherDetacher.DetachVolume started\" volume={AttachedVolume:{VolumeName:kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-73740996-47d1-45b1-ba5f-42acd1d75114 VolumeSpec:0xc000372bb8 
NodeName:nodes-us-west3-a-9jqc PluginIsAttachable:true DevicePath: DeviceMountPath: PluginName:} MountedByNode:false DetachRequestedTime:0001-01-01 00:00:00 +0000 UTC}\nI0623 01:28:05.704877 11 operation_generator.go:1603] Verified volume is safe to detach for volume \"pvc-73740996-47d1-45b1-ba5f-42acd1d75114\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-73740996-47d1-45b1-ba5f-42acd1d75114\") on node \"nodes-us-west3-a-9jqc\" \nW0623 01:28:05.818398 11 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0623 01:28:05.818440 11 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0623 01:28:05.883814 11 operation_generator.go:513] DetachVolume.Detach succeeded for volume \"pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\" (UniqueName: \"kubernetes.io/csi/pd.csi.storage.gke.io^projects/k8s-jkns-gce-soak-2/zones/us-west3-a/disks/pvc-8861af5d-f3f7-4131-bc49-3294fa79d49e\") on node \"nodes-us-west3-a-j1m9\" \nI0623 01:28:06.397447 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-284zh\" objectUID=5cdf5c72-4d70-4fb2-945a-f4e6d25d95b4 kind=\"Pod\" virtual=false\nI0623 01:28:06.397896 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-6tfgj\" objectUID=389ada26-3b1f-41f3-b094-edeee58fd43f kind=\"Pod\" virtual=false\nI0623 01:28:06.398264 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-kmqtt\" objectUID=54be12ae-c98b-4b02-b961-a169c84cd2ea kind=\"Pod\" virtual=false\nI0623 01:28:06.398629 11 garbagecollector.go:504] 
\"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-vl2sg\" objectUID=f6b7d52e-2442-4aaa-a549-0a9261f9df3b kind=\"Pod\" virtual=false\nI0623 01:28:06.398850 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-czkgg\" objectUID=7b83ddce-214d-4d6e-9f54-dd70af642120 kind=\"Pod\" virtual=false\nI0623 01:28:06.398974 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-66hdl\" objectUID=ffde1bdc-7f7e-45be-950c-4fde165a7dfb kind=\"Pod\" virtual=false\nI0623 01:28:06.399067 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-pmfkq\" objectUID=3de8e090-4088-4f34-9e1e-c0da51063d2c kind=\"Pod\" virtual=false\nI0623 01:28:06.399123 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-bv8sx\" objectUID=44db2105-2264-4aa8-b5ff-3943cb9de3be kind=\"Pod\" virtual=false\nI0623 01:28:06.399150 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-nqk6l\" objectUID=1cf481e7-ac0c-4b50-bde6-7edbbe618b84 kind=\"Pod\" virtual=false\nI0623 01:28:06.399179 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-qbfwr\" objectUID=672abe09-e6c6-48c7-8018-0e40bbdebe27 kind=\"Pod\" virtual=false\nI0623 01:28:06.399213 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-pc2ct\" objectUID=c0dd61d4-5839-4e99-891f-4687c0c95fc5 kind=\"Pod\" virtual=false\nI0623 01:28:06.399262 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-wph79\" objectUID=2c919855-013d-4992-adf5-a62f25ca0492 kind=\"Pod\" virtual=false\nI0623 01:28:06.399299 11 
garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-9ctxw\" objectUID=2b512a0f-084f-4b52-9d9a-20917b2d602f kind=\"Pod\" virtual=false\nI0623 01:28:06.399324 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-h4ld2\" objectUID=a9fd38ca-c3d4-4ccb-b269-65b28442e130 kind=\"Pod\" virtual=false\nI0623 01:28:06.399352 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-sdkx5\" objectUID=e741400b-06c2-471c-891b-2731a189fb12 kind=\"Pod\" virtual=false\nI0623 01:28:06.399376 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-ckvnk\" objectUID=b13472ad-0dd4-491d-8933-4e33bff7b00e kind=\"Pod\" virtual=false\nI0623 01:28:06.399412 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-945bm\" objectUID=47b89077-1d46-4b67-a112-7fabdf0c47a5 kind=\"Pod\" virtual=false\nI0623 01:28:06.399441 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-827ht\" objectUID=7b4bb726-5c98-4fa8-b829-aa18c6f5b1f0 kind=\"Pod\" virtual=false\nI0623 01:28:06.399468 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-cfvjn\" objectUID=175713b3-fb8c-4cc0-b382-ab843e8c6b96 kind=\"Pod\" virtual=false\nI0623 01:28:06.399857 11 garbagecollector.go:504] \"Processing object\" object=\"kubelet-2437/cleanup40-11c0c995-7a01-45eb-a658-c646e6ca5ffc-fm7hx\" objectUID=ad14bb01-accd-4459-8d0d-6ba998979991 kind=\"Pod\" virtual=false\nI0623 01:28:06.417529 11 namespace_controller.go:185] Namespace has been deleted provisioning-5722\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-master-us-west3-a-bgwv ====\n==== 
START logs for container kube-proxy of pod kube-system/kube-proxy-master-us-west3-a-bgwv ====\n2022/06/23 01:09:13 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): /usr/local/bin/kube-proxy,--cluster-cidr=100.96.0.0/11,--conntrack-max-per-core=131072,--hostname-override=master-us-west3-a-bgwv,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://127.0.0.1,--oom-score-adj=-998,--v=2\n2022/06/23 01:09:13 Now listening for interrupts\nI0623 01:09:13.951820 10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0623 01:09:13.952799 10 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0623 01:09:13.952982 10 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0623 01:09:13.953158 10 flags.go:64] FLAG: --bind-address-hard-fail=\"false\"\nI0623 01:09:13.953427 10 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0623 01:09:13.953596 10 flags.go:64] FLAG: --cleanup=\"false\"\nI0623 01:09:13.953714 10 flags.go:64] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0623 01:09:13.953919 10 flags.go:64] FLAG: --config=\"\"\nI0623 01:09:13.960037 10 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0623 01:09:13.961281 10 flags.go:64] FLAG: --conntrack-max-per-core=\"131072\"\nI0623 01:09:13.961474 10 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0623 01:09:13.961782 10 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0623 01:09:13.961926 10 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0623 01:09:13.962082 10 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0623 01:09:13.962245 10 flags.go:64] FLAG: --feature-gates=\"\"\nI0623 01:09:13.962393 10 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0623 01:09:13.962547 10 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0623 01:09:13.962704 10 flags.go:64] FLAG: --help=\"false\"\nI0623 01:09:13.962858 10 flags.go:64] FLAG: 
--hostname-override=\"master-us-west3-a-bgwv\"\nI0623 01:09:13.963145 10 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0623 01:09:13.963312 10 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0623 01:09:13.963482 10 flags.go:64] FLAG: --iptables-sync-period=\"30s\"\nI0623 01:09:13.963652 10 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0623 01:09:13.963868 10 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0623 01:09:13.964083 10 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0623 01:09:13.964230 10 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0623 01:09:13.964526 10 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0623 01:09:13.964696 10 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0623 01:09:13.965132 10 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0623 01:09:13.965311 10 flags.go:64] FLAG: --ipvs-udp-timeout=\"0s\"\nI0623 01:09:13.965454 10 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0623 01:09:13.965605 10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0623 01:09:13.965903 10 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0623 01:09:13.966103 10 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0623 01:09:13.966246 10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0623 01:09:13.966414 10 flags.go:64] FLAG: --log-dir=\"\"\nI0623 01:09:13.966529 10 flags.go:64] FLAG: --log-file=\"\"\nI0623 01:09:13.966705 10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0623 01:09:13.966825 10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0623 01:09:13.967018 10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0623 01:09:13.967136 10 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0623 01:09:13.967458 10 flags.go:64] FLAG: --masquerade-all=\"false\"\nI0623 01:09:13.967585 10 flags.go:64] FLAG: --master=\"https://127.0.0.1\"\nI0623 01:09:13.967775 10 flags.go:64] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0623 01:09:13.967918 10 flags.go:64] FLAG: 
--metrics-port=\"10249\"\nI0623 01:09:13.968072 10 flags.go:64] FLAG: --nodeport-addresses=\"[]\"\nI0623 01:09:13.968276 10 flags.go:64] FLAG: --one-output=\"false\"\nI0623 01:09:13.968420 10 flags.go:64] FLAG: --oom-score-adj=\"-998\"\nI0623 01:09:13.968703 10 flags.go:64] FLAG: --pod-bridge-interface=\"\"\nI0623 01:09:13.968845 10 flags.go:64] FLAG: --pod-interface-name-prefix=\"\"\nI0623 01:09:13.969013 10 flags.go:64] FLAG: --profiling=\"false\"\nI0623 01:09:13.969129 10 flags.go:64] FLAG: --proxy-mode=\"\"\nI0623 01:09:13.969333 10 flags.go:64] FLAG: --proxy-port-range=\"\"\nI0623 01:09:13.969476 10 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0623 01:09:13.969678 10 flags.go:64] FLAG: --skip-headers=\"false\"\nI0623 01:09:13.969800 10 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0623 01:09:13.970116 10 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0623 01:09:13.970259 10 flags.go:64] FLAG: --udp-timeout=\"250ms\"\nI0623 01:09:13.970418 10 flags.go:64] FLAG: --v=\"2\"\nI0623 01:09:13.970558 10 flags.go:64] FLAG: --version=\"false\"\nI0623 01:09:13.970725 10 flags.go:64] FLAG: --vmodule=\"\"\nI0623 01:09:13.970866 10 flags.go:64] FLAG: --write-config-to=\"\"\nI0623 01:09:13.971045 10 server.go:231] \"Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP\"\nI0623 01:09:13.971295 10 feature_gate.go:245] feature gates: &{map[]}\nI0623 01:09:13.971748 10 feature_gate.go:245] feature gates: &{map[]}\nE0623 01:09:24.036571 10 node.go:152] Failed to retrieve node info: Get \"https://127.0.0.1/api/v1/nodes/master-us-west3-a-bgwv\": net/http: TLS handshake timeout\nE0623 01:09:41.028075 10 node.go:152] Failed to retrieve node info: nodes \"master-us-west3-a-bgwv\" is forbidden: User \"system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope\nI0623 01:09:43.203893 10 node.go:163] Successfully retrieved node IP: 10.0.16.5\nI0623 01:09:43.203931 10 
server_others.go:138] \"Detected node IP\" address=\"10.0.16.5\"\nI0623 01:09:43.203982 10 server_others.go:578] \"Unknown proxy mode, assuming iptables proxy\" proxyMode=\"\"\nI0623 01:09:43.204083 10 server_others.go:175] \"DetectLocalMode\" LocalMode=\"ClusterCIDR\"\nI0623 01:09:43.254230 10 server_others.go:206] \"Using iptables Proxier\"\nI0623 01:09:43.254267 10 server_others.go:213] \"kube-proxy running in dual-stack mode\" ipFamily=IPv4\nI0623 01:09:43.254281 10 server_others.go:214] \"Creating dualStackProxier for iptables\"\nI0623 01:09:43.254298 10 server_others.go:501] \"Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\"\nI0623 01:09:43.254332 10 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0623 01:09:43.254421 10 utils.go:431] \"Changed sysctl\" name=\"net/ipv4/conf/all/route_localnet\" before=0 after=1\nI0623 01:09:43.254473 10 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0623 01:09:43.254523 10 proxier.go:319] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0623 01:09:43.254576 10 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0623 01:09:43.254591 10 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0623 01:09:43.254650 10 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0623 01:09:43.254684 10 proxier.go:319] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0623 01:09:43.254704 10 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0623 01:09:43.254880 10 server.go:661] \"Version info\" version=\"v1.25.0-alpha.1\"\nI0623 01:09:43.254892 10 server.go:663] \"Golang 
settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0623 01:09:43.256377 10 conntrack.go:52] \"Setting nf_conntrack_max\" nf_conntrack_max=262144\nI0623 01:09:43.256479 10 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_tcp_timeout_close_wait\" value=3600\nI0623 01:09:43.257022 10 config.go:317] \"Starting service config controller\"\nI0623 01:09:43.257038 10 shared_informer.go:255] Waiting for caches to sync for service config\nI0623 01:09:43.257062 10 config.go:226] \"Starting endpoint slice config controller\"\nI0623 01:09:43.257067 10 shared_informer.go:255] Waiting for caches to sync for endpoint slice config\nI0623 01:09:43.257850 10 config.go:444] \"Starting node config controller\"\nI0623 01:09:43.257862 10 shared_informer.go:255] Waiting for caches to sync for node config\nI0623 01:09:43.263210 10 service.go:322] \"Service updated ports\" service=\"default/kubernetes\" portCount=1\nI0623 01:09:43.263263 10 service.go:322] \"Service updated ports\" service=\"kube-system/kube-dns\" portCount=3\nI0623 01:09:43.275766 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0623 01:09:43.275796 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0623 01:09:43.357592 10 shared_informer.go:262] Caches are synced for endpoint slice config\nI0623 01:09:43.357669 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0623 01:09:43.357690 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0623 01:09:43.357592 10 shared_informer.go:262] Caches are synced for service config\nI0623 01:09:43.357752 10 service.go:437] \"Adding new service port\" portName=\"default/kubernetes:https\" servicePort=\"100.64.0.1:443/TCP\"\nI0623 01:09:43.357770 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:metrics\" 
servicePort=\"100.64.0.10:9153/TCP\"\nI0623 01:09:43.357784 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns\" servicePort=\"100.64.0.10:53/UDP\"\nI0623 01:09:43.357798 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns-tcp\" servicePort=\"100.64.0.10:53/TCP\"\nI0623 01:09:43.357837 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:09:43.357971 10 shared_informer.go:262] Caches are synced for node config\nI0623 01:09:43.414052 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0623 01:09:43.437494 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"79.764811ms\"\nI0623 01:09:43.437529 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:09:43.528963 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0623 01:09:43.532047 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"94.51914ms\"\nI0623 01:09:46.265763 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:09:46.300833 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0623 01:09:46.305038 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.285048ms\"\nI0623 01:09:46.305064 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:09:46.333691 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0623 01:09:46.335636 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.569827ms\"\nI0623 01:10:16.535229 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:16.575844 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 
numNATRules=10\nI0623 01:10:16.583607 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.411718ms\"\nI0623 01:10:30.285767 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:30.348040 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0623 01:10:30.354295 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.535934ms\"\nI0623 01:10:30.354339 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:30.396921 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0623 01:10:30.399447 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.103983ms\"\nI0623 01:10:34.230209 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:34.286416 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0623 01:10:34.291264 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"61.060583ms\"\nI0623 01:10:34.291334 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:34.338550 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0623 01:10:34.340704 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.389882ms\"\nI0623 01:10:36.204138 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:36.302386 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0623 01:10:36.307391 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"103.262991ms\"\nI0623 01:10:36.307437 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:10:36.337845 10 proxier.go:1461] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 
numNATChains=4 numNATRules=5\nI0623 01:10:36.340046 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.502625ms\"\nI0623 01:11:26.707660 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:11:26.742774 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=10\nI0623 01:11:26.746609 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.994093ms\"\nI0623 01:11:27.719354 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI0623 01:11:27.719384 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:11:27.762280 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=25\nI0623 01:11:27.770427 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.217356ms\"\nI0623 01:11:31.506829 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:11:31.559352 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=25\nI0623 01:11:31.563635 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.882385ms\"\nI0623 01:11:31.563782 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:11:31.609760 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0623 01:11:31.621851 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"58.170331ms\"\nI0623 01:15:00.324200 10 service.go:322] \"Service updated ports\" service=\"aggregator-8650/sample-api\" portCount=1\nI0623 01:15:00.324310 10 service.go:437] \"Adding new service port\" portName=\"aggregator-8650/sample-api\" servicePort=\"100.71.34.68:7443/TCP\"\nI0623 01:15:00.324334 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:00.449264 10 proxier.go:1461] 
\"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0623 01:15:00.455275 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"130.974781ms\"\nI0623 01:15:00.455337 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:00.567123 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0623 01:15:00.582468 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"127.13887ms\"\nI0623 01:15:19.160294 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:19.204449 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0623 01:15:19.214469 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.208215ms\"\nI0623 01:15:21.215136 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:21.230812 10 service.go:322] \"Service updated ports\" service=\"aggregator-8650/sample-api\" portCount=0\nI0623 01:15:21.278936 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36\nI0623 01:15:21.283635 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.54264ms\"\nI0623 01:15:21.283684 10 service.go:462] \"Removing service port\" portName=\"aggregator-8650/sample-api\"\nI0623 01:15:21.283718 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:21.322936 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0623 01:15:21.328635 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.94833ms\"\nI0623 01:15:21.606064 10 service.go:322] \"Service updated ports\" service=\"webhook-5770/e2e-test-webhook\" portCount=1\nI0623 01:15:22.329145 10 service.go:437] \"Adding new service port\" 
portName=\"webhook-5770/e2e-test-webhook\" servicePort=\"100.70.32.43:8443/TCP\"\nI0623 01:15:22.329384 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:22.380212 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0623 01:15:22.387002 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.897267ms\"\nI0623 01:15:22.804096 10 service.go:322] \"Service updated ports\" service=\"webhook-5770/e2e-test-webhook\" portCount=0\nI0623 01:15:23.387141 10 service.go:462] \"Removing service port\" portName=\"webhook-5770/e2e-test-webhook\"\nI0623 01:15:23.387191 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:23.425978 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=36\nI0623 01:15:23.431246 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.140128ms\"\nI0623 01:15:31.755476 10 service.go:322] \"Service updated ports\" service=\"services-7159/sourceip-test\" portCount=1\nI0623 01:15:31.755536 10 service.go:437] \"Adding new service port\" portName=\"services-7159/sourceip-test\" servicePort=\"100.64.174.213:8080/TCP\"\nI0623 01:15:31.755580 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:31.836866 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0623 01:15:31.852146 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"96.615262ms\"\nI0623 01:15:31.852201 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:31.923298 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0623 01:15:31.931505 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"79.312606ms\"\nI0623 01:15:35.763554 10 proxier.go:853] \"Syncing iptables 
rules\"\nI0623 01:15:35.838593 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=39\nI0623 01:15:35.843631 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"80.253652ms\"\nI0623 01:15:48.945913 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:48.969890 10 service.go:322] \"Service updated ports\" service=\"services-7159/sourceip-test\" portCount=0\nI0623 01:15:49.017859 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=36\nI0623 01:15:49.025269 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"79.402238ms\"\nI0623 01:15:49.025304 10 service.go:462] \"Removing service port\" portName=\"services-7159/sourceip-test\"\nI0623 01:15:49.025340 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:49.092298 10 proxier.go:1461] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=34\nI0623 01:15:49.107010 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"81.700778ms\"\nI0623 01:15:56.524991 10 service.go:322] \"Service updated ports\" service=\"services-127/service-proxy-toggled\" portCount=1\nI0623 01:15:56.525046 10 service.go:437] \"Adding new service port\" portName=\"services-127/service-proxy-toggled\" servicePort=\"100.64.117.104:80/TCP\"\nI0623 01:15:56.525068 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:56.619829 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=34\nI0623 01:15:56.628559 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"103.517011ms\"\nI0623 01:15:56.628600 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:15:56.663590 10 proxier.go:1461] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 
numFilterRules=4 numNATChains=15 numNATRules=34
I0623 01:15:56.669275 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.681076ms"
I0623 01:15:56.737084 10 service.go:322] "Service updated ports" service="deployment-7389/test-rolling-update-with-lb" portCount=1
I0623 01:15:56.761388 10 service.go:322] "Service updated ports" service="deployment-7389/test-rolling-update-with-lb" portCount=1
I0623 01:15:57.670662 10 service.go:437] "Adding new service port" portName="deployment-7389/test-rolling-update-with-lb" servicePort="100.69.93.209:80/TCP"
I0623 01:15:57.670745 10 proxier.go:853] "Syncing iptables rules"
I0623 01:15:57.727641 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=51
I0623 01:15:57.740950 10 service_health.go:124] "Opening healthcheck" service="deployment-7389/test-rolling-update-with-lb" port=32103
I0623 01:15:57.741882 10 proxier.go:820] "SyncProxyRules complete" elapsed="71.267119ms"
I0623 01:16:02.012399 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:02.051665 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=56
I0623 01:16:02.056288 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.929651ms"
I0623 01:16:04.792766 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:04.863233 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=59
I0623 01:16:04.873163 10 proxier.go:820] "SyncProxyRules complete" elapsed="80.446467ms"
I0623 01:16:05.351323 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:05.394064 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=62
I0623 01:16:05.400380 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.118418ms"
I0623 01:16:19.873357 10 service.go:322] "Service updated ports" service="proxy-7634/proxy-service-bmkj2" portCount=4
I0623 01:16:19.873412 10 service.go:437] "Adding new service port" portName="proxy-7634/proxy-service-bmkj2:tlsportname2" servicePort="100.65.242.79:444/TCP"
I0623 01:16:19.873428 10 service.go:437] "Adding new service port" portName="proxy-7634/proxy-service-bmkj2:portname1" servicePort="100.65.242.79:80/TCP"
I0623 01:16:19.873443 10 service.go:437] "Adding new service port" portName="proxy-7634/proxy-service-bmkj2:portname2" servicePort="100.65.242.79:81/TCP"
I0623 01:16:19.873462 10 service.go:437] "Adding new service port" portName="proxy-7634/proxy-service-bmkj2:tlsportname1" servicePort="100.65.242.79:443/TCP"
I0623 01:16:19.873486 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:19.942316 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=8 numNATChains=25 numNATRules=62
I0623 01:16:19.948320 10 proxier.go:820] "SyncProxyRules complete" elapsed="74.916671ms"
I0623 01:16:19.948367 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:19.997754 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=8 numNATChains=25 numNATRules=62
I0623 01:16:20.006770 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.410946ms"
I0623 01:16:31.910823 10 service.go:322] "Service updated ports" service="services-127/service-proxy-toggled" portCount=0
I0623 01:16:31.910870 10 service.go:462] "Removing service port" portName="services-127/service-proxy-toggled"
I0623 01:16:31.910895 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:31.964602 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=25 numNATRules=55
I0623 01:16:31.969935 10 proxier.go:820] "SyncProxyRules complete" elapsed="59.062055ms"
I0623 01:16:31.970023 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:32.015379 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=21 numNATRules=51
I0623 01:16:32.021223 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.237625ms"
I0623 01:16:38.117903 10 service.go:322] "Service updated ports" service="deployment-7389/test-rolling-update-with-lb" portCount=1
I0623 01:16:38.117966 10 service.go:439] "Updating existing service port" portName="deployment-7389/test-rolling-update-with-lb" servicePort="100.69.93.209:80/TCP"
I0623 01:16:38.117992 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:38.163520 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=21 numNATRules=52
I0623 01:16:38.168747 10 proxier.go:820] "SyncProxyRules complete" elapsed="50.788246ms"
I0623 01:16:55.581541 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:55.645627 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=72
I0623 01:16:55.652163 10 proxier.go:820] "SyncProxyRules complete" elapsed="70.701995ms"
I0623 01:16:56.591233 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:56.714782 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=75
I0623 01:16:56.737723 10 proxier.go:820] "SyncProxyRules complete" elapsed="146.552203ms"
I0623 01:16:56.737862 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:56.849340 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=73
I0623 01:16:56.861160 10 proxier.go:820] "SyncProxyRules complete" elapsed="123.388111ms"
I0623 01:16:57.863239 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:57.929753 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=14 numFilterChains=4 numFilterRules=8 numNATChains=29 numNATRules=60
I0623 01:16:57.939081 10 proxier.go:820] "SyncProxyRules complete" elapsed="75.965578ms"
I0623 01:16:58.463864 10 service.go:322] "Service updated ports" service="crd-webhook-8837/e2e-test-crd-conversion-webhook" portCount=1
I0623 01:16:58.494070 10 service.go:322] "Service updated ports" service="services-127/service-proxy-toggled" portCount=1
I0623 01:16:58.939221 10 service.go:437] "Adding new service port" portName="crd-webhook-8837/e2e-test-crd-conversion-webhook" servicePort="100.66.250.85:9443/TCP"
I0623 01:16:58.939252 10 service.go:437] "Adding new service port" portName="services-127/service-proxy-toggled" servicePort="100.64.117.104:80/TCP"
I0623 01:16:58.944830 10 proxier.go:853] "Syncing iptables rules"
I0623 01:16:58.992640 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=18 numFilterChains=4 numFilterRules=8 numNATChains=27 numNATRules=68
I0623 01:16:59.002461 10 proxier.go:820] "SyncProxyRules complete" elapsed="63.274611ms"
I0623 01:16:59.942879 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:00.058572 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=19 numFilterChains=4 numFilterRules=8 numNATChains=28 numNATRules=71
I0623 01:17:00.067833 10 proxier.go:820] "SyncProxyRules complete" elapsed="125.013983ms"
I0623 01:17:01.070505 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:01.176144 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=18 numFilterChains=4 numFilterRules=8 numNATChains=28 numNATRules=69
I0623 01:17:01.181138 10 proxier.go:820] "SyncProxyRules complete" elapsed="110.753703ms"
I0623 01:17:02.658201 10 service.go:322] "Service updated ports" service="crd-webhook-8837/e2e-test-crd-conversion-webhook" portCount=0
I0623 01:17:02.658244 10 service.go:462] "Removing service port" portName="crd-webhook-8837/e2e-test-crd-conversion-webhook"
I0623 01:17:02.658274 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:02.765815 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=17 numFilterChains=4 numFilterRules=8 numNATChains=27 numNATRules=65
I0623 01:17:02.784933 10 proxier.go:820] "SyncProxyRules complete" elapsed="126.680575ms"
I0623 01:17:02.785012 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:02.891653 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=17 numFilterChains=4 numFilterRules=8 numNATChains=25 numNATRules=63
I0623 01:17:02.904508 10 proxier.go:820] "SyncProxyRules complete" elapsed="119.530122ms"
I0623 01:17:03.852397 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:03.953564 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=18 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=66
I0623 01:17:03.992516 10 proxier.go:820] "SyncProxyRules complete" elapsed="140.184176ms"
I0623 01:17:04.860367 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:04.922989 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=17 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64
I0623 01:17:04.937076 10 proxier.go:820] "SyncProxyRules complete" elapsed="76.782862ms"
I0623 01:17:06.614066 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:06.715311 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=18 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=66
I0623 01:17:06.745272 10 proxier.go:820] "SyncProxyRules complete" elapsed="131.287059ms"
I0623 01:17:06.745383 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:06.808309 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=18 numFilterChains=4 numFilterRules=8 numNATChains=26 numNATRules=64
I0623 01:17:06.835112 10 proxier.go:820] "SyncProxyRules complete" elapsed="89.773383ms"
I0623 01:17:07.604642 10 service.go:322] "Service updated ports" service="services-1811/svc-tolerate-unready" portCount=1
I0623 01:17:07.840054 10 service.go:437] "Adding new service port" portName="services-1811/svc-tolerate-unready:http" servicePort="100.71.130.96:80/TCP"
I0623 01:17:07.840172 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:07.931108 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=17 numFilterChains=4 numFilterRules=10 numNATChains=25 numNATRules=63
I0623 01:17:07.943124 10 proxier.go:820] "SyncProxyRules complete" elapsed="103.100795ms"
I0623 01:17:08.785374 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:08.855105 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=13 numFilterChains=4 numFilterRules=10 numNATChains=26 numNATRules=64
I0623 01:17:08.865771 10 proxier.go:820] "SyncProxyRules complete" elapsed="80.514105ms"
I0623 01:17:11.787405 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:11.952101 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=14 numFilterChains=4 numFilterRules=10 numNATChains=26 numNATRules=66
I0623 01:17:11.966152 10 proxier.go:820] "SyncProxyRules complete" elapsed="178.8196ms"
I0623 01:17:11.966267 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:12.036698 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=13 numFilterChains=4 numFilterRules=10 numNATChains=26 numNATRules=64
I0623 01:17:12.059782 10 proxier.go:820] "SyncProxyRules complete" elapsed="93.583024ms"
I0623 01:17:13.132349 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:13.178325 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=14 numFilterChains=4 numFilterRules=8 numNATChains=28 numNATRules=71
I0623 01:17:13.185303 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.999719ms"
I0623 01:17:14.188118 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:14.295012 10 service.go:322] "Service updated ports" service="proxy-7634/proxy-service-bmkj2" portCount=0
I0623 01:17:14.333508 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=14 numFilterChains=4 numFilterRules=8 numNATChains=28 numNATRules=71
I0623 01:17:14.364092 10 proxier.go:820] "SyncProxyRules complete" elapsed="176.037082ms"
I0623 01:17:15.365408 10 service.go:462] "Removing service port" portName="proxy-7634/proxy-service-bmkj2:portname1"
I0623 01:17:15.365439 10 service.go:462] "Removing service port" portName="proxy-7634/proxy-service-bmkj2:portname2"
I0623 01:17:15.365449 10 service.go:462] "Removing service port" portName="proxy-7634/proxy-service-bmkj2:tlsportname1"
I0623 01:17:15.365458 10 service.go:462] "Removing service port" portName="proxy-7634/proxy-service-bmkj2:tlsportname2"
I0623 01:17:15.365488 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:15.437854 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=71
I0623 01:17:15.445167 10 proxier.go:820] "SyncProxyRules complete" elapsed="79.791685ms"
I0623 01:17:16.446104 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:16.497092 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=72
I0623 01:17:16.503797 10 proxier.go:820] "SyncProxyRules complete" elapsed="57.776531ms"
I0623 01:17:17.819285 10 service.go:322] "Service updated ports" service="dns-7912/dns-test-service-3" portCount=1
I0623 01:17:17.819341 10 service.go:437] "Adding new service port" portName="dns-7912/dns-test-service-3:http" servicePort="100.70.176.179:80/TCP"
I0623 01:17:17.819369 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:17.917140 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=71
I0623 01:17:17.931348 10 proxier.go:820] "SyncProxyRules complete" elapsed="112.006805ms"
I0623 01:17:18.180946 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:18.322179 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=74
I0623 01:17:18.330297 10 proxier.go:820] "SyncProxyRules complete" elapsed="149.869248ms"
I0623 01:17:19.330709 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:19.378079 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=72
I0623 01:17:19.384658 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.037191ms"
I0623 01:17:19.881039 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:20.041881 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=71
I0623 01:17:20.054359 10 proxier.go:820] "SyncProxyRules complete" elapsed="173.370104ms"
I0623 01:17:21.652154 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:21.719992 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=69
I0623 01:17:21.731347 10 proxier.go:820] "SyncProxyRules complete" elapsed="79.248379ms"
I0623 01:17:22.080226 10 service.go:322] "Service updated ports" service="services-127/service-proxy-toggled" portCount=0
I0623 01:17:22.080271 10 service.go:462] "Removing service port" portName="services-127/service-proxy-toggled"
I0623 01:17:22.080324 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:22.120433 10 service.go:322] "Service updated ports" service="dns-7912/dns-test-service-3" portCount=0
I0623 01:17:22.164174 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=27 numNATRules=63
I0623 01:17:22.177244 10 proxier.go:820] "SyncProxyRules complete" elapsed="96.968601ms"
I0623 01:17:23.178196 10 service.go:462] "Removing service port" portName="dns-7912/dns-test-service-3:http"
I0623 01:17:23.178245 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:23.249189 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=60
I0623 01:17:23.261226 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.050025ms"
I0623 01:17:23.858174 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:23.963197 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=63
I0623 01:17:23.978159 10 proxier.go:820] "SyncProxyRules complete" elapsed="120.067108ms"
I0623 01:17:24.978400 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:25.036228 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=61
I0623 01:17:25.042876 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.541238ms"
I0623 01:17:51.557437 10 proxier.go:853] "Syncing iptables rules"
I0623 01:17:51.643804 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=60
I0623 01:17:51.651237 10 proxier.go:820] "SyncProxyRules complete" elapsed="93.873807ms"
I0623 01:18:12.032924 10 service.go:322] "Service updated ports" service="sctp-7261/sctp-endpoint-test" portCount=1
I0623 01:18:12.032974 10 service.go:437] "Adding new service port" portName="sctp-7261/sctp-endpoint-test" servicePort="100.64.71.7:5060/SCTP"
I0623 01:18:12.033000 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:12.127584 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60
I0623 01:18:12.146595 10 proxier.go:820] "SyncProxyRules complete" elapsed="113.619446ms"
I0623 01:18:12.146656 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:12.209068 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60
I0623 01:18:12.219616 10 proxier.go:820] "SyncProxyRules complete" elapsed="72.975977ms"
I0623 01:18:19.694166 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:19.746178 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=65
I0623 01:18:19.752799 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.945623ms"
I0623 01:18:20.380146 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:20.458813 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=62
I0623 01:18:20.468347 10 proxier.go:820] "SyncProxyRules complete" elapsed="88.2461ms"
I0623 01:18:21.469295 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:21.545323 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60
I0623 01:18:21.554790 10 proxier.go:820] "SyncProxyRules complete" elapsed="85.564955ms"
I0623 01:18:22.664820 10 service.go:322] "Service updated ports" service="services-2156/clusterip-service" portCount=1
I0623 01:18:22.664874 10 service.go:437] "Adding new service port" portName="services-2156/clusterip-service" servicePort="100.69.231.223:80/TCP"
I0623 01:18:22.664899 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:22.703481 10 service.go:322] "Service updated ports" service="services-2156/externalsvc" portCount=1
I0623 01:18:22.771323 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=24 numNATRules=60
I0623 01:18:22.786652 10 proxier.go:820] "SyncProxyRules complete" elapsed="121.7763ms"
I0623 01:18:22.786708 10 service.go:437] "Adding new service port" portName="services-2156/externalsvc" servicePort="100.64.181.229:80/TCP"
I0623 01:18:22.786765 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:22.915999 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=7 numNATChains=24 numNATRules=60
I0623 01:18:22.931536 10 proxier.go:820] "SyncProxyRules complete" elapsed="144.835454ms"
I0623 01:18:25.048872 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:25.162040 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=65
I0623 01:18:25.196880 10 proxier.go:820] "SyncProxyRules complete" elapsed="148.045044ms"
I0623 01:18:26.308196 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:26.352250 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=27 numNATRules=68
I0623 01:18:26.360263 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.129485ms"
I0623 01:18:27.962720 10 service.go:322] "Service updated ports" service="services-1811/svc-tolerate-unready" portCount=0
I0623 01:18:27.962768 10 service.go:462] "Removing service port" portName="services-1811/svc-tolerate-unready:http"
I0623 01:18:27.962796 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:28.066555 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=27 numNATRules=63
I0623 01:18:28.078951 10 proxier.go:820] "SyncProxyRules complete" elapsed="116.177491ms"
I0623 01:18:28.079036 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:28.173912 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=24 numNATRules=60
I0623 01:18:28.191786 10 proxier.go:820] "SyncProxyRules complete" elapsed="112.787458ms"
I0623 01:18:28.867631 10 service.go:322] "Service updated ports" service="services-2156/clusterip-service" portCount=0
I0623 01:18:29.192413 10 service.go:462] "Removing service port" portName="services-2156/clusterip-service"
I0623 01:18:29.192454 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:29.248241 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=24 numNATRules=60
I0623 01:18:29.256879 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.476038ms"
I0623 01:18:31.215216 10 service.go:322] "Service updated ports" service="sctp-7261/sctp-endpoint-test" portCount=0
I0623 01:18:31.215256 10 service.go:462] "Removing service port" portName="sctp-7261/sctp-endpoint-test"
I0623 01:18:31.215287 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:31.298581 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=60
I0623 01:18:31.306510 10 proxier.go:820] "SyncProxyRules complete" elapsed="91.234598ms"
I0623 01:18:31.306630 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:31.389057 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=7 numNATChains=24 numNATRules=48
I0623 01:18:31.413546 10 proxier.go:820] "SyncProxyRules complete" elapsed="106.978824ms"
I0623 01:18:39.473421 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:39.533018 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=8 numNATChains=18 numNATRules=37
I0623 01:18:39.537399 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.032258ms"
I0623 01:18:40.979244 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:41.032798 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=8 numNATChains=15 numNATRules=34
I0623 01:18:41.044654 10 proxier.go:820] "SyncProxyRules complete" elapsed="65.445264ms"
I0623 01:18:41.545499 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:41.597108 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=8 numNATChains=15 numNATRules=34
I0623 01:18:41.602208 10 proxier.go:820] "SyncProxyRules complete" elapsed="56.75712ms"
I0623 01:18:42.234146 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:42.270849 10 service.go:322] "Service updated ports" service="services-2156/externalsvc" portCount=0
I0623 01:18:42.318672 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=8 numNATChains=15 numNATRules=34
I0623 01:18:42.326442 10 proxier.go:820] "SyncProxyRules complete" elapsed="92.353683ms"
I0623 01:18:43.327353 10 service.go:462] "Removing service port" portName="services-2156/externalsvc"
I0623 01:18:43.327422 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:43.431129 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34
I0623 01:18:43.439017 10 proxier.go:820] "SyncProxyRules complete" elapsed="111.684111ms"
I0623 01:18:46.855370 10 service.go:322] "Service updated ports" service="webhook-3314/e2e-test-webhook" portCount=1
I0623 01:18:46.855420 10 service.go:437] "Adding new service port" portName="webhook-3314/e2e-test-webhook" servicePort="100.67.88.251:8443/TCP"
I0623 01:18:46.855447 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:46.994975 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=8 numNATChains=15 numNATRules=34
I0623 01:18:47.006239 10 proxier.go:820] "SyncProxyRules complete" elapsed="150.819091ms"
I0623 01:18:47.006318 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:47.079927 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=39
I0623 01:18:47.125704 10 proxier.go:820] "SyncProxyRules complete" elapsed="119.421488ms"
I0623 01:18:48.440463 10 service.go:322] "Service updated ports" service="webhook-3314/e2e-test-webhook" portCount=0
I0623 01:18:48.440510 10 service.go:462] "Removing service port" portName="webhook-3314/e2e-test-webhook"
I0623 01:18:48.440538 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:48.553291 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=36
I0623 01:18:48.565317 10 proxier.go:820] "SyncProxyRules complete" elapsed="124.801034ms"
I0623 01:18:49.565616 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:49.646556 10 proxier.go:1461] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=34
I0623 01:18:49.660046 10 proxier.go:820] "SyncProxyRules complete" elapsed="94.489947ms"
I0623 01:18:49.768616 10 service.go:322] "Service updated ports" service="conntrack-3098/svc-udp" portCount=1
I0623 01:18:50.663799 10 service.go:437] "Adding new service port" portName="conntrack-3098/svc-udp:udp" servicePort="100.68.225.167:80/UDP"
I0623 01:18:50.663847 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:50.715273 10 proxier.go:1461] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=9 numNATChains=15 numNATRules=34
I0623 01:18:50.722154 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.40284ms"
I0623 01:18:52.250687 10 service.go:322] "Service updated ports" service="endpointslice-9712/example-int-port" portCount=1
I0623 01:18:52.250739 10 service.go:437] "Adding new service port" portName="endpointslice-9712/example-int-port:example" servicePort="100.68.228.54:80/TCP"
I0623 01:18:52.250765 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:52.282119 10 service.go:322] "Service updated ports" service="endpointslice-9712/example-named-port" portCount=1
I0623 01:18:52.310838 10 proxier.go:1461] "Reloading service iptables data" numServices=7 numEndpoints=7 numFilterChains=4 numFilterRules=10 numNATChains=15 numNATRules=34
I0623 01:18:52.323230 10 service.go:322] "Service updated ports" service="endpointslice-9712/example-no-match" portCount=1
I0623 01:18:52.324130 10 proxier.go:820] "SyncProxyRules complete" elapsed="73.395874ms"
I0623 01:18:52.324176 10 service.go:437] "Adding new service port" portName="endpointslice-9712/example-named-port:http" servicePort="100.71.124.128:80/TCP"
I0623 01:18:52.324196 10 service.go:437] "Adding new service port" portName="endpointslice-9712/example-no-match:example-no-match" servicePort="100.71.54.97:80/TCP"
I0623 01:18:52.324229 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:52.373470 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=7 numFilterChains=4 numFilterRules=12 numNATChains=15 numNATRules=34
I0623 01:18:52.378684 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.517905ms"
I0623 01:18:53.378867 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:53.460155 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=7 numFilterChains=4 numFilterRules=12 numNATChains=15 numNATRules=34
I0623 01:18:53.474941 10 proxier.go:820] "SyncProxyRules complete" elapsed="96.105864ms"
I0623 01:18:56.792422 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-3098/svc-udp:udp" clusterIP="100.68.225.167"
I0623 01:18:56.792519 10 proxier.go:847] "Stale service" protocol="udp" servicePortName="conntrack-3098/svc-udp:udp" nodePort=32324
I0623 01:18:56.792529 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:56.846704 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=8 numFilterChains=4 numFilterRules=10 numNATChains=18 numNATRules=42
I0623 01:18:56.874861 10 proxier.go:820] "SyncProxyRules complete" elapsed="82.522091ms"
I0623 01:18:57.777172 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:57.902370 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=9 numNATChains=20 numNATRules=47
I0623 01:18:57.923902 10 proxier.go:820] "SyncProxyRules complete" elapsed="146.765423ms"
I0623 01:18:58.569033 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:58.618314 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=8 numNATChains=23 numNATRules=55
I0623 01:18:58.623090 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.341692ms"
I0623 01:18:59.623563 10 proxier.go:853] "Syncing iptables rules"
I0623 01:18:59.678426 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=8 numNATChains=23 numNATRules=55
I0623 01:18:59.685991 10 proxier.go:820] "SyncProxyRules complete" elapsed="62.457163ms"
I0623 01:19:02.440525 10 service.go:322] "Service updated ports" service="deployment-7389/test-rolling-update-with-lb" portCount=0
I0623 01:19:02.440581 10 service.go:462] "Removing service port" portName="deployment-7389/test-rolling-update-with-lb"
I0623 01:19:02.440610 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:02.493146 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=55
I0623 01:19:02.498754 10 service_health.go:107] "Closing healthcheck" service="deployment-7389/test-rolling-update-with-lb" port=32103
E0623 01:19:02.499030 10 service_health.go:187] "Healthcheck closed" err="accept tcp [::]:32103: use of closed network connection" service="deployment-7389/test-rolling-update-with-lb"
I0623 01:19:02.499086 10 proxier.go:820] "SyncProxyRules complete" elapsed="58.517156ms"
I0623 01:19:09.988605 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:10.072443 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=58
I0623 01:19:10.084872 10 proxier.go:820] "SyncProxyRules complete" elapsed="96.58966ms"
I0623 01:19:10.329544 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:10.372260 10 proxier.go:1461] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=56
I0623 01:19:10.391270 10 proxier.go:820] "SyncProxyRules complete" elapsed="61.789537ms"
I0623 01:19:12.003289 10 service.go:322] "Service updated ports" service="proxy-2966/test-service" portCount=1
I0623 01:19:12.003344 10 service.go:437] "Adding new service port" portName="proxy-2966/test-service" servicePort="100.67.128.208:80/TCP"
I0623 01:19:12.003373 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:12.067379 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=55
I0623 01:19:12.075079 10 proxier.go:820] "SyncProxyRules complete" elapsed="71.737283ms"
I0623 01:19:12.075170 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:12.124371 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=60
I0623 01:19:12.129211 10 proxier.go:820] "SyncProxyRules complete" elapsed="54.094765ms"
I0623 01:19:12.726918 10 service.go:322] "Service updated ports" service="endpointslice-7076/example-empty-selector" portCount=1
I0623 01:19:13.129464 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:13.202124 10 proxier.go:1461] "Reloading service iptables data" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=60
I0623 01:19:13.212209 10 service.go:322] "Service updated ports" service="services-9737/nodeport-service" portCount=1
I0623 01:19:13.220905 10 proxier.go:820] "SyncProxyRules complete" elapsed="91.502574ms"
I0623 01:19:13.241295 10 service.go:322] "Service updated ports" service="services-9737/externalsvc" portCount=1
I0623 01:19:13.988467 10 service.go:322] "Service updated ports" service="webhook-9322/e2e-test-webhook" portCount=1
I0623 01:19:14.221225 10 service.go:437] "Adding new service port" portName="services-9737/nodeport-service" servicePort="100.67.51.148:80/TCP"
I0623 01:19:14.221255 10 service.go:437] "Adding new service port" portName="services-9737/externalsvc" servicePort="100.64.236.169:80/TCP"
I0623 01:19:14.221270 10 service.go:437] "Adding new service port" portName="webhook-9322/e2e-test-webhook" servicePort="100.64.96.50:8443/TCP"
I0623 01:19:14.221648 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:14.294922 10 proxier.go:1461] "Reloading service iptables data" numServices=12 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=27 numNATRules=65
I0623 01:19:14.304560 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.380283ms"
I0623 01:19:15.304793 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:15.376601 10 proxier.go:1461] "Reloading service iptables data" numServices=12 numEndpoints=14 numFilterChains=4 numFilterRules=6 numNATChains=29 numNATRules=70
I0623 01:19:15.402498 10 proxier.go:820] "SyncProxyRules complete" elapsed="97.784844ms"
I0623 01:19:17.483057 10 service.go:322] "Service updated ports" service="proxy-2966/test-service" portCount=0
I0623 01:19:17.483104 10 service.go:462] "Removing service port" portName="proxy-2966/test-service"
I0623 01:19:17.483134 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:17.507413 10 service.go:322] "Service updated ports" service="webhook-9322/e2e-test-webhook" portCount=0
I0623 01:19:17.565053 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=29 numNATRules=67
I0623 01:19:17.574561 10 proxier.go:820] "SyncProxyRules complete" elapsed="91.45467ms"
I0623 01:19:17.574611 10 service.go:462] "Removing service port" portName="webhook-9322/e2e-test-webhook"
I0623 01:19:17.574694 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:17.648141 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=27 numNATRules=62
I0623 01:19:17.655598 10 proxier.go:820] "SyncProxyRules complete" elapsed="80.988471ms"
I0623 01:19:18.616617 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:18.685128 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=58
I0623 01:19:18.693041 10 proxier.go:820] "SyncProxyRules complete" elapsed="76.49452ms"
I0623 01:19:19.693438 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:19.763258 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=60
I0623 01:19:19.774580 10 proxier.go:820] "SyncProxyRules complete" elapsed="81.2347ms"
I0623 01:19:20.552516 10 service.go:322] "Service updated ports" service="dns-9022/test-service-2" portCount=1
I0623 01:19:20.552567 10 service.go:437] "Adding new service port" portName="dns-9022/test-service-2:http" servicePort="100.66.141.170:80/TCP"
I0623 01:19:20.552598 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:20.605250 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=12 numFilterChains=4 numFilterRules=7 numNATChains=25 numNATRules=60
I0623 01:19:20.614589 10 proxier.go:820] "SyncProxyRules complete" elapsed="62.024043ms"
I0623 01:19:20.847499 10 service.go:322] "Service updated ports" service="services-6444/svc-not-tolerate-unready" portCount=1
I0623 01:19:21.615511 10 service.go:437] "Adding new service port" portName="services-6444/svc-not-tolerate-unready:http" servicePort="100.66.202.151:80/TCP"
I0623 01:19:21.615616 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:21.654535 10 proxier.go:1461] "Reloading service iptables data" numServices=12 numEndpoints=13 numFilterChains=4 numFilterRules=9 numNATChains=26 numNATRules=63
I0623 01:19:21.660281 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.829945ms"
I0623 01:19:22.401077 10 service.go:322] "Service updated ports" service="services-9737/nodeport-service" portCount=0
I0623 01:19:22.663638 10 service.go:462] "Removing service port" portName="services-9737/nodeport-service"
I0623 01:19:22.663687 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:22.740279 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=63
I0623 01:19:22.755680 10 proxier.go:820] "SyncProxyRules complete" elapsed="92.074458ms"
I0623 01:19:23.585971 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:23.653948 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=14 numFilterChains=4 numFilterRules=7 numNATChains=26 numNATRules=63
I0623 01:19:23.662853 10 proxier.go:820] "SyncProxyRules complete" elapsed="76.923448ms"
I0623 01:19:24.664001 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:24.703817 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=71
I0623 01:19:24.709303 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.442255ms"
I0623 01:19:25.599884 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:25.635040 10 service.go:322] "Service updated ports" service="conntrack-3098/svc-udp" portCount=0
I0623 01:19:25.689185 10 proxier.go:1461] "Reloading service iptables data" numServices=11 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=29 numNATRules=66
I0623 01:19:25.711379 10 proxier.go:820] "SyncProxyRules complete" elapsed="111.536173ms"
I0623 01:19:26.711809 10 service.go:462] "Removing service port" portName="conntrack-3098/svc-udp:udp"
I0623 01:19:26.711886 10 proxier.go:853] "Syncing iptables rules"
I0623 01:19:26.747142 10 proxier.go:1461] "Reloading service iptables data" numServices=10 numEndpoints=13 
numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=63\nI0623 01:19:26.758701 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.912251ms\"\nI0623 01:19:29.165522 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:29.220584 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=26 numNATRules=61\nI0623 01:19:29.225530 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"60.066586ms\"\nI0623 01:19:29.225677 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:29.267041 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=8 numNATChains=25 numNATRules=52\nI0623 01:19:29.272105 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"46.537788ms\"\nI0623 01:19:31.394956 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:31.499831 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=52\nI0623 01:19:31.514800 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"119.88506ms\"\nI0623 01:19:31.771641 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:31.813672 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=52\nI0623 01:19:31.820414 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.84316ms\"\nI0623 01:19:32.574537 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:32.633886 10 proxier.go:1461] \"Reloading service iptables data\" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=7 numNATChains=22 numNATRules=52\nI0623 01:19:32.649544 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"75.057467ms\"\nI0623 01:19:32.677118 10 service.go:322] \"Service updated ports\" service=\"services-9737/externalsvc\" portCount=0\nI0623 01:19:32.825709 10 
service.go:322] \"Service updated ports\" service=\"endpointslice-9712/example-int-port\" portCount=0\nI0623 01:19:32.855509 10 service.go:322] \"Service updated ports\" service=\"endpointslice-9712/example-named-port\" portCount=0\nI0623 01:19:32.874428 10 service.go:322] \"Service updated ports\" service=\"endpointslice-9712/example-no-match\" portCount=0\nI0623 01:19:33.211855 10 service.go:322] \"Service updated ports\" service=\"services-4489/nodeport-reuse\" portCount=1\nI0623 01:19:33.574245 10 service.go:462] \"Removing service port\" portName=\"endpointslice-9712/example-no-match:example-no-match\"\nI0623 01:19:33.574271 10 service.go:462] \"Removing service port\" portName=\"services-9737/externalsvc\"\nI0623 01:19:33.574283 10 service.go:462] \"Removing service port\" portName=\"endpointslice-9712/example-int-port:example\"\nI0623 01:19:33.574294 10 service.go:462] \"Removing service port\" portName=\"endpointslice-9712/example-named-port:http\"\nI0623 01:19:33.574381 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:33.677056 10 proxier.go:1461] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=44\nI0623 01:19:33.685239 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"111.002842ms\"\nI0623 01:19:34.685561 10 proxier.go:853] \"Syncing iptables rules\"\nI0623 01:19:34.724872 10 p