Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-05-27 00:11
Elapsed: 39m39s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0527 00:18:54.600920    4049 up.go:43] Cleaning up any leaked resources from previous cluster
I0527 00:18:54.600942    4049 dumplogs.go:38] /logs/artifacts/0045e0db-be80-11eb-b3db-1ecf15fc999e/kops toolbox dump --name e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0527 00:18:54.616340    4069 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0527 00:18:54.616544    4069 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io" not found
W0527 00:18:55.130368    4049 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0527 00:18:55.130438    4049 down.go:48] /logs/artifacts/0045e0db-be80-11eb-b3db-1ecf15fc999e/kops delete cluster --name e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --yes
I0527 00:18:55.149702    4079 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0527 00:18:55.150461    4079 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io" not found
I0527 00:18:55.680134    4049 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/05/27 00:18:55 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0527 00:18:55.687790    4049 http.go:37] curl https://ip.jsb.workers.dev
I0527 00:18:55.787001    4049 up.go:144] /logs/artifacts/0045e0db-be80-11eb-b3db-1ecf15fc999e/kops create cluster --name e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.20.7 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210518 --channel=alpha --networking=kubenet --container-runtime=docker --admin-access 34.71.37.131/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-1a --master-size c5.large
I0527 00:18:55.804707    4089 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0527 00:18:55.804827    4089 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0527 00:18:55.852917    4089 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0527 00:18:56.341799    4089 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0527 00:19:25.999052    4049 up.go:181] /logs/artifacts/0045e0db-be80-11eb-b3db-1ecf15fc999e/kops validate cluster --name e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0527 00:19:26.020545    4108 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0527 00:19:26.020812    4108 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io

W0527 00:19:27.751509    4108 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0527 00:19:37.782530    4108 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
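Each retry above fails because the `api.` record still resolves to the placeholder kops writes at cluster creation: 203.0.113.123, an address in the RFC 5737 TEST-NET-3 documentation range, which is never routable (hence the later `dial tcp 203.0.113.123:443: i/o timeout`). A minimal sketch of the check this validation implies, using only the standard library (the helper name and the TEST-NET-3 test are illustrative, not part of kops):

```python
import ipaddress

# kops publishes 203.0.113.123 as a placeholder A record; it sits in the
# RFC 5737 TEST-NET-3 block (203.0.113.0/24), reserved for documentation
# and never routable, so connections to it will always time out.
KOPS_PLACEHOLDER = ipaddress.ip_address("203.0.113.123")
TEST_NET_3 = ipaddress.ip_network("203.0.113.0/24")

def api_dns_updated(resolved_ip: str) -> bool:
    """Return True once the API record points at a real address."""
    ip = ipaddress.ip_address(resolved_ip)
    return ip != KOPS_PLACEHOLDER and ip not in TEST_NET_3
```

In practice `resolved_ip` would come from resolving `api.<cluster-name>` (e.g. via `socket.gethostbyname`); until dns-controller overwrites the placeholder, that lookup either fails with "no such host" or returns the TEST-NET-3 address, exactly as the log shows.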
W0527 00:19:47.821332    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 18 identical validation retries (same INSTANCE GROUPS / VALIDATION ERRORS output as above, 00:19:57 through 00:22:48) ...
W0527 00:22:58.543098    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
W0527 00:23:08.581074    4108 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0527 00:23:48.617726    4108 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
... skipping 5 lines ...
ip-172-20-42-187.ap-southeast-1.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-8f5559c9b-r6f6x	system-cluster-critical pod "coredns-8f5559c9b-r6f6x" is not ready (coredns)

Validation Failed
W0527 00:24:03.308861    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
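Once DNS resolves, validation narrows to pod readiness, and the remaining errors follow the tab-separated KIND / NAME / MESSAGE layout seen above. A throwaway parser for pulling the failing pod out of such a row (a sketch assuming only the layout observed in this log, not a stable kops output format):

```python
def parse_validation_error(row: str) -> dict:
    """Split a kops 'VALIDATION ERRORS' row (KIND \t NAME \t MESSAGE).

    Illustrative only: assumes the tab-separated columns seen in this
    log; kops may pad with multiple tabs, so empty fields are dropped.
    """
    kind, name, message = [p for p in row.split("\t") if p][:3]
    # Pod names are namespaced as "namespace/pod"; other kinds (dns) are not.
    if "/" in name:
        namespace, pod = name.split("/", 1)
    else:
        namespace, pod = "", name
    return {"kind": kind, "namespace": namespace, "name": pod,
            "message": message}
```

Against the coredns row above this yields kind `Pod`, namespace `kube-system`, and the not-ready message, which is usually enough to drive a follow-up `kubectl describe`.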
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 338 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 408 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 746 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:26:43.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 91 lines ...
May 27 00:26:44.798: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.914 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:26:45.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-53" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:26:45.645: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 155 lines ...
May 27 00:26:46.776: INFO: AfterEach: Cleaning up test resources.
May 27 00:26:46.776: INFO: Deleting PersistentVolumeClaim "pvc-rbn77"
May 27 00:26:46.974: INFO: Deleting PersistentVolume "hostpath-h7dpx"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:26:47.181: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 214 lines ...
STEP: Building a namespace api object, basename downward-api
May 27 00:26:44.006: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109
STEP: Creating a pod to test downward api env vars
May 27 00:26:44.590: INFO: Waiting up to 5m0s for pod "downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc" in namespace "downward-api-7497" to be "Succeeded or Failed"
May 27 00:26:44.781: INFO: Pod "downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc": Phase="Pending", Reason="", readiness=false. Elapsed: 190.795726ms
May 27 00:26:46.971: INFO: Pod "downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381541258s
May 27 00:26:49.162: INFO: Pod "downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572622518s
May 27 00:26:51.353: INFO: Pod "downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.763474262s
STEP: Saw pod success
May 27 00:26:51.353: INFO: Pod "downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc" satisfied condition "Succeeded or Failed"
May 27 00:26:51.549: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc container dapi-container: <nil>
STEP: delete the pod
May 27 00:26:52.031: INFO: Waiting for pod downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc to disappear
May 27 00:26:52.225: INFO: Pod downward-api-61e14ab2-68b0-4154-b072-17c71b0190fc no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.909 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:26:52.843: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:26:54.051: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 54 lines ...
• [SLOW TEST:16.544 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:01.253: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
May 27 00:26:43.627: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-ca84b741-8717-43a3-89c3-ba1ad415b3d1
STEP: Creating a pod to test consume configMaps
May 27 00:26:44.403: INFO: Waiting up to 5m0s for pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3" in namespace "configmap-8530" to be "Succeeded or Failed"
May 27 00:26:44.594: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 191.480177ms
May 27 00:26:46.785: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382453365s
May 27 00:26:48.978: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574755786s
May 27 00:26:51.179: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.776385002s
May 27 00:26:53.371: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.968207946s
May 27 00:26:55.564: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.160798834s
May 27 00:26:57.755: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.351741031s
May 27 00:26:59.945: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.542507826s
STEP: Saw pod success
May 27 00:26:59.946: INFO: Pod "pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3" satisfied condition "Succeeded or Failed"
May 27 00:27:00.136: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3 container agnhost-container: <nil>
STEP: delete the pod
May 27 00:27:00.532: INFO: Waiting for pod pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3 to disappear
May 27 00:27:00.723: INFO: Pod pod-configmaps-686ca045-1004-414a-8e52-8fa6da114df3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:18.431 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:01.300: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 46 lines ...
• [SLOW TEST:18.458 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:01.342: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-8b1a28c2-bed4-4195-8fd0-ed492b426d74
STEP: Creating a pod to test consume secrets
May 27 00:26:46.561: INFO: Waiting up to 5m0s for pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc" in namespace "secrets-1252" to be "Succeeded or Failed"
May 27 00:26:46.758: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Pending", Reason="", readiness=false. Elapsed: 196.494257ms
May 27 00:26:48.954: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39326971s
May 27 00:26:51.172: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610821432s
May 27 00:26:53.369: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.807472155s
May 27 00:26:55.565: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.003638453s
May 27 00:26:57.761: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.199903327s
May 27 00:26:59.958: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.397065361s
STEP: Saw pod success
May 27 00:26:59.958: INFO: Pod "pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc" satisfied condition "Succeeded or Failed"
May 27 00:27:00.157: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc container secret-volume-test: <nil>
STEP: delete the pod
May 27 00:27:00.857: INFO: Waiting for pod pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc to disappear
May 27 00:27:01.053: INFO: Pod pod-secrets-5ae0099a-6f91-41ba-a7d7-e0e2112083bc no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 14 lines ...
STEP: Building a namespace api object, basename emptydir
May 27 00:26:44.859: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 27 00:26:45.440: INFO: Waiting up to 5m0s for pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368" in namespace "emptydir-6789" to be "Succeeded or Failed"
May 27 00:26:45.634: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 192.799395ms
May 27 00:26:47.828: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386007687s
May 27 00:26:50.021: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579232047s
May 27 00:26:52.214: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772801235s
May 27 00:26:54.420: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 8.978666443s
May 27 00:26:56.614: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 11.172036682s
May 27 00:26:58.807: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Pending", Reason="", readiness=false. Elapsed: 13.36562825s
May 27 00:27:01.000: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.558803946s
STEP: Saw pod success
May 27 00:27:01.000: INFO: Pod "pod-577d00df-3f8a-4cdc-ab6d-335bc464c368" satisfied condition "Succeeded or Failed"
May 27 00:27:01.194: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-577d00df-3f8a-4cdc-ab6d-335bc464c368 container test-container: <nil>
STEP: delete the pod
May 27 00:27:01.589: INFO: Waiting for pod pod-577d00df-3f8a-4cdc-ab6d-335bc464c368 to disappear
May 27 00:27:01.782: INFO: Pod pod-577d00df-3f8a-4cdc-ab6d-335bc464c368 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:19.397 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:02.409: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 6 lines ...
STEP: Building a namespace api object, basename security-context
May 27 00:26:45.466: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 27 00:26:46.082: INFO: Waiting up to 5m0s for pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77" in namespace "security-context-194" to be "Succeeded or Failed"
May 27 00:26:46.279: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 197.473011ms
May 27 00:26:48.537: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454773699s
May 27 00:26:50.777: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694753692s
May 27 00:26:52.975: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.892988925s
May 27 00:26:55.171: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 9.089066918s
May 27 00:26:57.367: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 11.285098871s
May 27 00:26:59.563: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Pending", Reason="", readiness=false. Elapsed: 13.48115663s
May 27 00:27:01.759: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.677244026s
STEP: Saw pod success
May 27 00:27:01.759: INFO: Pod "security-context-04cd9158-4761-493c-a3ae-cfb448186e77" satisfied condition "Succeeded or Failed"
May 27 00:27:01.957: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod security-context-04cd9158-4761-493c-a3ae-cfb448186e77 container test-container: <nil>
STEP: delete the pod
May 27 00:27:02.357: INFO: Waiting for pod security-context-04cd9158-4761-493c-a3ae-cfb448186e77 to disappear
May 27 00:27:02.553: INFO: Pod security-context-04cd9158-4761-493c-a3ae-cfb448186e77 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:20.103 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:03.174: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 149 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 44 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
May 27 00:27:02.273: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 27 00:27:02.464: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-h5qx
STEP: Creating a pod to test subpath
May 27 00:27:02.661: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-h5qx" in namespace "provisioning-212" to be "Succeeded or Failed"
May 27 00:27:02.851: INFO: Pod "pod-subpath-test-inlinevolume-h5qx": Phase="Pending", Reason="", readiness=false. Elapsed: 190.527906ms
May 27 00:27:05.045: INFO: Pod "pod-subpath-test-inlinevolume-h5qx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384100781s
May 27 00:27:07.237: INFO: Pod "pod-subpath-test-inlinevolume-h5qx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.576116469s
STEP: Saw pod success
May 27 00:27:07.237: INFO: Pod "pod-subpath-test-inlinevolume-h5qx" satisfied condition "Succeeded or Failed"
May 27 00:27:07.429: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-subpath-test-inlinevolume-h5qx container test-container-subpath-inlinevolume-h5qx: <nil>
STEP: delete the pod
May 27 00:27:07.821: INFO: Waiting for pod pod-subpath-test-inlinevolume-h5qx to disappear
May 27 00:27:08.011: INFO: Pod pod-subpath-test-inlinevolume-h5qx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-h5qx
May 27 00:27:08.012: INFO: Deleting pod "pod-subpath-test-inlinevolume-h5qx" in namespace "provisioning-212"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:08.786: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 119 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 27 00:27:06.345: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9eb0a55b-e1fa-43e0-a307-018896c7d663" in namespace "security-context-test-1844" to be "Succeeded or Failed"
May 27 00:27:06.540: INFO: Pod "busybox-privileged-false-9eb0a55b-e1fa-43e0-a307-018896c7d663": Phase="Pending", Reason="", readiness=false. Elapsed: 194.680133ms
May 27 00:27:08.734: INFO: Pod "busybox-privileged-false-9eb0a55b-e1fa-43e0-a307-018896c7d663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.388957576s
May 27 00:27:08.734: INFO: Pod "busybox-privileged-false-9eb0a55b-e1fa-43e0-a307-018896c7d663" satisfied condition "Succeeded or Failed"
May 27 00:27:08.945: INFO: Got logs for pod "busybox-privileged-false-9eb0a55b-e1fa-43e0-a307-018896c7d663": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:08.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1844" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:09.346: INFO: Only supported for providers [azure] (not aws)
... skipping 172 lines ...
May 27 00:27:00.973: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-7ct7s] to have phase Bound
May 27 00:27:01.165: INFO: PersistentVolumeClaim pvc-7ct7s found and phase=Bound (191.753416ms)
May 27 00:27:01.165: INFO: Waiting up to 3m0s for PersistentVolume local-9m6l7 to have phase Bound
May 27 00:27:01.357: INFO: PersistentVolume local-9m6l7 found and phase=Bound (191.598229ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6ml6
STEP: Creating a pod to test subpath
May 27 00:27:01.940: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6ml6" in namespace "provisioning-6517" to be "Succeeded or Failed"
May 27 00:27:02.132: INFO: Pod "pod-subpath-test-preprovisionedpv-6ml6": Phase="Pending", Reason="", readiness=false. Elapsed: 191.830538ms
May 27 00:27:04.324: INFO: Pod "pod-subpath-test-preprovisionedpv-6ml6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384331403s
May 27 00:27:06.520: INFO: Pod "pod-subpath-test-preprovisionedpv-6ml6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.579568596s
STEP: Saw pod success
May 27 00:27:06.520: INFO: Pod "pod-subpath-test-preprovisionedpv-6ml6" satisfied condition "Succeeded or Failed"
May 27 00:27:06.725: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-6ml6 container test-container-subpath-preprovisionedpv-6ml6: <nil>
STEP: delete the pod
May 27 00:27:07.118: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6ml6 to disappear
May 27 00:27:07.309: INFO: Pod pod-subpath-test-preprovisionedpv-6ml6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6ml6
May 27 00:27:07.309: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6ml6" in namespace "provisioning-6517"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:09.911: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 38 lines ...
• [SLOW TEST:6.672 seconds]
[sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:105
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":1,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:01.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-3134/configmap-test-a0bff2af-9335-496e-b2af-db8d49ce10b4
STEP: Creating a pod to test consume configMaps
May 27 00:27:02.654: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59" in namespace "configmap-3134" to be "Succeeded or Failed"
May 27 00:27:02.848: INFO: Pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59": Phase="Pending", Reason="", readiness=false. Elapsed: 193.841447ms
May 27 00:27:05.043: INFO: Pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389110488s
May 27 00:27:07.237: INFO: Pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.582856807s
May 27 00:27:09.435: INFO: Pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780946695s
May 27 00:27:11.629: INFO: Pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.974762896s
STEP: Saw pod success
May 27 00:27:11.629: INFO: Pod "pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59" satisfied condition "Succeeded or Failed"
May 27 00:27:11.822: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59 container env-test: <nil>
STEP: delete the pod
May 27 00:27:12.217: INFO: Waiting for pod pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59 to disappear
May 27 00:27:12.410: INFO: Pod pod-configmaps-a1e4608b-1f2e-41e3-93a4-a2aaf0b2af59 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.511 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:12.807: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 30 lines ...
May 27 00:26:45.691: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9522-aws-scjgkck
STEP: creating a claim
May 27 00:26:45.883: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-tnlq
STEP: Creating a pod to test subpath
May 27 00:26:46.503: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tnlq" in namespace "provisioning-9522" to be "Succeeded or Failed"
May 27 00:26:46.694: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 190.987437ms
May 27 00:26:48.886: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383456202s
May 27 00:26:51.121: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.618382993s
May 27 00:26:53.312: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809217414s
May 27 00:26:55.505: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001862732s
May 27 00:26:57.695: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.192619507s
May 27 00:26:59.886: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.383572002s
May 27 00:27:02.078: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.575233956s
May 27 00:27:04.270: INFO: Pod "pod-subpath-test-dynamicpv-tnlq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.767637213s
STEP: Saw pod success
May 27 00:27:04.271: INFO: Pod "pod-subpath-test-dynamicpv-tnlq" satisfied condition "Succeeded or Failed"
May 27 00:27:04.461: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-tnlq container test-container-volume-dynamicpv-tnlq: <nil>
STEP: delete the pod
May 27 00:27:04.869: INFO: Waiting for pod pod-subpath-test-dynamicpv-tnlq to disappear
May 27 00:27:05.060: INFO: Pod pod-subpath-test-dynamicpv-tnlq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tnlq
May 27 00:27:05.060: INFO: Deleting pod "pod-subpath-test-dynamicpv-tnlq" in namespace "provisioning-9522"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:17.411: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 250 lines ...
May 27 00:27:01.002: INFO: PersistentVolumeClaim pvc-6n2z4 found but phase is Pending instead of Bound.
May 27 00:27:03.193: INFO: PersistentVolumeClaim pvc-6n2z4 found and phase=Bound (4.573433586s)
May 27 00:27:03.193: INFO: Waiting up to 3m0s for PersistentVolume local-k752x to have phase Bound
May 27 00:27:03.384: INFO: PersistentVolume local-k752x found and phase=Bound (190.957976ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cvw9
STEP: Creating a pod to test atomic-volume-subpath
May 27 00:27:03.959: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cvw9" in namespace "provisioning-7741" to be "Succeeded or Failed"
May 27 00:27:04.151: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Pending", Reason="", readiness=false. Elapsed: 191.352201ms
May 27 00:27:06.347: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387877538s
May 27 00:27:08.553: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 4.594066306s
May 27 00:27:10.745: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 6.785515748s
May 27 00:27:12.937: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 8.978115011s
May 27 00:27:15.129: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 11.169307486s
May 27 00:27:17.320: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 13.361051063s
May 27 00:27:19.512: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 15.552242754s
May 27 00:27:21.703: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 17.743469442s
May 27 00:27:23.895: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 19.935281009s
May 27 00:27:26.086: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Running", Reason="", readiness=true. Elapsed: 22.126564735s
May 27 00:27:28.285: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.326148863s
STEP: Saw pod success
May 27 00:27:28.286: INFO: Pod "pod-subpath-test-preprovisionedpv-cvw9" satisfied condition "Succeeded or Failed"
May 27 00:27:28.479: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-cvw9 container test-container-subpath-preprovisionedpv-cvw9: <nil>
STEP: delete the pod
May 27 00:27:28.877: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cvw9 to disappear
May 27 00:27:29.068: INFO: Pod pod-subpath-test-preprovisionedpv-cvw9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cvw9
May 27 00:27:29.068: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cvw9" in namespace "provisioning-7741"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:31.696: INFO: Only supported for providers [openstack] (not aws)
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:34.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-7725" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:34.444: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 302 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:34.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7154" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:34.706: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:35.316: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
May 27 00:26:45.310: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-6282-aws-sc78hx6
STEP: creating a claim
May 27 00:26:45.512: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-5h5b
STEP: Creating a pod to test exec-volume-test
May 27 00:26:46.155: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-5h5b" in namespace "volume-6282" to be "Succeeded or Failed"
May 27 00:26:46.356: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 200.877982ms
May 27 00:26:48.600: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44445275s
May 27 00:26:50.835: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.680160105s
May 27 00:26:53.035: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.880138979s
May 27 00:26:55.241: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.085865214s
May 27 00:26:57.441: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.285753183s
... skipping 2 lines ...
May 27 00:27:04.051: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.896159696s
May 27 00:27:06.255: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.10005297s
May 27 00:27:08.456: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.3011431s
May 27 00:27:10.656: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.501057025s
May 27 00:27:12.856: INFO: Pod "exec-volume-test-dynamicpv-5h5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.701098042s
STEP: Saw pod success
May 27 00:27:12.856: INFO: Pod "exec-volume-test-dynamicpv-5h5b" satisfied condition "Succeeded or Failed"
May 27 00:27:13.056: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod exec-volume-test-dynamicpv-5h5b container exec-container-dynamicpv-5h5b: <nil>
STEP: delete the pod
May 27 00:27:13.469: INFO: Waiting for pod exec-volume-test-dynamicpv-5h5b to disappear
May 27 00:27:13.668: INFO: Pod exec-volume-test-dynamicpv-5h5b no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-5h5b
May 27 00:27:13.669: INFO: Deleting pod "exec-volume-test-dynamicpv-5h5b" in namespace "volume-6282"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 10 lines ...
May 27 00:26:44.375: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2841-aws-scm444w
STEP: creating a claim
May 27 00:26:44.578: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-4lww
STEP: Creating a pod to test subpath
May 27 00:26:45.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4lww" in namespace "provisioning-2841" to be "Succeeded or Failed"
May 27 00:26:45.416: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 202.688577ms
May 27 00:26:47.618: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404940327s
May 27 00:26:49.822: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608661949s
May 27 00:26:52.025: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.811566496s
May 27 00:26:54.261: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 9.047723709s
May 27 00:26:56.463: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 11.250154855s
... skipping 5 lines ...
May 27 00:27:09.695: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 24.48228854s
May 27 00:27:11.897: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 26.684447372s
May 27 00:27:14.100: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 28.886907448s
May 27 00:27:16.305: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Pending", Reason="", readiness=false. Elapsed: 31.091711467s
May 27 00:27:18.508: INFO: Pod "pod-subpath-test-dynamicpv-4lww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.295247828s
STEP: Saw pod success
May 27 00:27:18.509: INFO: Pod "pod-subpath-test-dynamicpv-4lww" satisfied condition "Succeeded or Failed"
May 27 00:27:18.719: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-4lww container test-container-subpath-dynamicpv-4lww: <nil>
STEP: delete the pod
May 27 00:27:19.133: INFO: Waiting for pod pod-subpath-test-dynamicpv-4lww to disappear
May 27 00:27:19.335: INFO: Pod pod-subpath-test-dynamicpv-4lww no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4lww
May 27 00:27:19.335: INFO: Deleting pod "pod-subpath-test-dynamicpv-4lww" in namespace "provisioning-2841"
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:38.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-323" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete","total":-1,"completed":2,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:39.151: INFO: Only supported for providers [gce gke] (not aws)
... skipping 128 lines ...
May 27 00:27:30.064: INFO: PersistentVolumeClaim pvc-ldqf9 found but phase is Pending instead of Bound.
May 27 00:27:32.257: INFO: PersistentVolumeClaim pvc-ldqf9 found and phase=Bound (8.960720087s)
May 27 00:27:32.257: INFO: Waiting up to 3m0s for PersistentVolume local-mlst8 to have phase Bound
May 27 00:27:32.448: INFO: PersistentVolume local-mlst8 found and phase=Bound (191.579088ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7rsj
STEP: Creating a pod to test subpath
May 27 00:27:33.025: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7rsj" in namespace "provisioning-1176" to be "Succeeded or Failed"
May 27 00:27:33.217: INFO: Pod "pod-subpath-test-preprovisionedpv-7rsj": Phase="Pending", Reason="", readiness=false. Elapsed: 191.856239ms
May 27 00:27:35.409: INFO: Pod "pod-subpath-test-preprovisionedpv-7rsj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384232137s
May 27 00:27:37.602: INFO: Pod "pod-subpath-test-preprovisionedpv-7rsj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576753273s
May 27 00:27:39.801: INFO: Pod "pod-subpath-test-preprovisionedpv-7rsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.775893778s
STEP: Saw pod success
May 27 00:27:39.801: INFO: Pod "pod-subpath-test-preprovisionedpv-7rsj" satisfied condition "Succeeded or Failed"
May 27 00:27:39.995: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-7rsj container test-container-volume-preprovisionedpv-7rsj: <nil>
STEP: delete the pod
May 27 00:27:40.393: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7rsj to disappear
May 27 00:27:40.586: INFO: Pod pod-subpath-test-preprovisionedpv-7rsj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7rsj
May 27 00:27:40.586: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7rsj" in namespace "provisioning-1176"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:34.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-8131/configmap-test-6cf47d9d-41a9-4915-beb5-53ca99ba7872
STEP: Creating a pod to test consume configMaps
May 27 00:27:35.986: INFO: Waiting up to 5m0s for pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e" in namespace "configmap-8131" to be "Succeeded or Failed"
May 27 00:27:36.177: INFO: Pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 190.971642ms
May 27 00:27:38.369: INFO: Pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383174433s
May 27 00:27:40.562: INFO: Pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576109174s
May 27 00:27:42.754: INFO: Pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.767918024s
May 27 00:27:44.945: INFO: Pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.959179183s
STEP: Saw pod success
May 27 00:27:44.945: INFO: Pod "pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e" satisfied condition "Succeeded or Failed"
May 27 00:27:45.136: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e container env-test: <nil>
STEP: delete the pod
May 27 00:27:45.525: INFO: Waiting for pod pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e to disappear
May 27 00:27:45.716: INFO: Pod pod-configmaps-22d72c54-b212-4d93-b5f7-7cd883483a3e no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.458 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:46.115: INFO: Driver local doesn't support ntfs -- skipping
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:46.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5662" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:47.082: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:555
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:584
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:53.208: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 135 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
May 27 00:27:48.284: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-d226cd3c-47f2-46f5-bfc3-90aa9c90cb87" in namespace "security-context-test-9855" to be "Succeeded or Failed"
May 27 00:27:48.476: INFO: Pod "busybox-privileged-true-d226cd3c-47f2-46f5-bfc3-90aa9c90cb87": Phase="Pending", Reason="", readiness=false. Elapsed: 191.576966ms
May 27 00:27:50.668: INFO: Pod "busybox-privileged-true-d226cd3c-47f2-46f5-bfc3-90aa9c90cb87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383687667s
May 27 00:27:52.860: INFO: Pod "busybox-privileged-true-d226cd3c-47f2-46f5-bfc3-90aa9c90cb87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.575716786s
May 27 00:27:52.860: INFO: Pod "busybox-privileged-true-d226cd3c-47f2-46f5-bfc3-90aa9c90cb87" satisfied condition "Succeeded or Failed"
May 27 00:27:53.055: INFO: Got logs for pod "busybox-privileged-true-d226cd3c-47f2-46f5-bfc3-90aa9c90cb87": ""
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:53.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9855" for this suite.

... skipping 273 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 3 lines ...
[BeforeEach] [k8s.io] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
May 27 00:27:54.690: INFO: Waiting up to 5m0s for pod "pod-always-succeed1c638fca-dc98-4c86-893a-b18681dc97ba" in namespace "pods-5063" to be "Succeeded or Failed"
May 27 00:27:54.878: INFO: Pod "pod-always-succeed1c638fca-dc98-4c86-893a-b18681dc97ba": Phase="Pending", Reason="", readiness=false. Elapsed: 187.816792ms
May 27 00:27:57.070: INFO: Pod "pod-always-succeed1c638fca-dc98-4c86-893a-b18681dc97ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380280727s
STEP: Saw pod success
May 27 00:27:57.070: INFO: Pod "pod-always-succeed1c638fca-dc98-4c86-893a-b18681dc97ba" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:27:59.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":2,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:27:59.849: INFO: Only supported for providers [azure] (not aws)
... skipping 35 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:833
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:36.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:28:01.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8056" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 155 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container Status should never report success for a pending container","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:07.254: INFO: Only supported for providers [azure] (not aws)
... skipping 171 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:20.119 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0}
[BeforeEach] [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:28:09.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 27 00:28:02.835: INFO: Waiting up to 5m0s for pod "pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d" in namespace "emptydir-9141" to be "Succeeded or Failed"
May 27 00:28:03.037: INFO: Pod "pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 201.997057ms
May 27 00:28:05.239: INFO: Pod "pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404476155s
May 27 00:28:07.442: INFO: Pod "pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.606760362s
May 27 00:28:09.644: INFO: Pod "pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.809089135s
STEP: Saw pod success
May 27 00:28:09.644: INFO: Pod "pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d" satisfied condition "Succeeded or Failed"
May 27 00:28:09.846: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d container test-container: <nil>
STEP: delete the pod
May 27 00:28:10.257: INFO: Waiting for pod pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d to disappear
May 27 00:28:10.459: INFO: Pod pod-cf6dd946-9d3e-4af9-80c4-2dd1852f9a6d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:88
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:28:10.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
May 27 00:28:12.092: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:ap-southeast-1a]
May 27 00:28:12.092: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
May 27 00:28:12.092: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 10 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
May 27 00:28:09.792: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 27 00:28:09.792: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tlrd
STEP: Creating a pod to test subpath
May 27 00:28:09.993: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tlrd" in namespace "provisioning-322" to be "Succeeded or Failed"
May 27 00:28:10.191: INFO: Pod "pod-subpath-test-inlinevolume-tlrd": Phase="Pending", Reason="", readiness=false. Elapsed: 198.205304ms
May 27 00:28:12.391: INFO: Pod "pod-subpath-test-inlinevolume-tlrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397944881s
May 27 00:28:14.591: INFO: Pod "pod-subpath-test-inlinevolume-tlrd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.598222149s
STEP: Saw pod success
May 27 00:28:14.591: INFO: Pod "pod-subpath-test-inlinevolume-tlrd" satisfied condition "Succeeded or Failed"
May 27 00:28:14.789: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-subpath-test-inlinevolume-tlrd container test-container-subpath-inlinevolume-tlrd: <nil>
STEP: delete the pod
May 27 00:28:15.198: INFO: Waiting for pod pod-subpath-test-inlinevolume-tlrd to disappear
May 27 00:28:15.400: INFO: Pod pod-subpath-test-inlinevolume-tlrd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tlrd
May 27 00:28:15.400: INFO: Deleting pod "pod-subpath-test-inlinevolume-tlrd" in namespace "provisioning-322"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:16.238: INFO: Only supported for providers [gce gke] (not aws)
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:28:15.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8269" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:16.404: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:833
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:50.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 64 lines ...
• [SLOW TEST:92.114 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:19.458: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 105 lines ...
• [SLOW TEST:6.992 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
May 27 00:27:27.453: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-gflxk] to have phase Bound
May 27 00:27:27.644: INFO: PersistentVolumeClaim pvc-gflxk found and phase=Bound (191.288333ms)
STEP: Deleting the previously created pod
May 27 00:27:34.604: INFO: Deleting pod "pvc-volume-tester-f5lf7" in namespace "csi-mock-volumes-3671"
May 27 00:27:34.797: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f5lf7" to be fully deleted
STEP: Checking CSI driver logs
May 27 00:27:39.396: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ca10345f-24a6-440e-aa48-1ae353795ab2/volumes/kubernetes.io~csi/pvc-70d9d5f6-7944-4f15-9f05-f16ee39ba000/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-f5lf7
May 27 00:27:39.396: INFO: Deleting pod "pvc-volume-tester-f5lf7" in namespace "csi-mock-volumes-3671"
STEP: Deleting claim pvc-gflxk
May 27 00:27:39.975: INFO: Waiting up to 2m0s for PersistentVolume pvc-70d9d5f6-7944-4f15-9f05-f16ee39ba000 to get deleted
May 27 00:27:40.168: INFO: PersistentVolume pvc-70d9d5f6-7944-4f15-9f05-f16ee39ba000 found and phase=Released (192.302618ms)
May 27 00:27:42.360: INFO: PersistentVolume pvc-70d9d5f6-7944-4f15-9f05-f16ee39ba000 was removed
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:437
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:487
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:23.543: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:53.449: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
May 27 00:27:54.409: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
May 27 00:27:55.511: INFO: Successfully created a new PD: "aws://ap-southeast-1a/vol-02793441cbb6afbb6".
May 27 00:27:55.511: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-wqmr
STEP: Creating a pod to test exec-volume-test
May 27 00:27:55.705: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-wqmr" in namespace "volume-633" to be "Succeeded or Failed"
May 27 00:27:55.897: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Pending", Reason="", readiness=false. Elapsed: 191.61535ms
May 27 00:27:58.089: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384155912s
May 27 00:28:00.282: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576573598s
May 27 00:28:02.474: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.768471076s
May 27 00:28:04.665: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.960260202s
May 27 00:28:06.859: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.153932634s
May 27 00:28:09.051: INFO: Pod "exec-volume-test-inlinevolume-wqmr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.346066215s
STEP: Saw pod success
May 27 00:28:09.051: INFO: Pod "exec-volume-test-inlinevolume-wqmr" satisfied condition "Succeeded or Failed"
May 27 00:28:09.243: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod exec-volume-test-inlinevolume-wqmr container exec-container-inlinevolume-wqmr: <nil>
STEP: delete the pod
May 27 00:28:09.634: INFO: Waiting for pod exec-volume-test-inlinevolume-wqmr to disappear
May 27 00:28:09.825: INFO: Pod exec-volume-test-inlinevolume-wqmr no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-wqmr
May 27 00:28:09.825: INFO: Deleting pod "exec-volume-test-inlinevolume-wqmr" in namespace "volume-633"
May 27 00:28:10.329: INFO: Couldn't delete PD "aws://ap-southeast-1a/vol-02793441cbb6afbb6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02793441cbb6afbb6 is currently attached to i-063fbd80874e99720
	status code: 400, request id: 396a645d-2e0a-45f9-a437-000d9b5fc630
May 27 00:28:16.220: INFO: Couldn't delete PD "aws://ap-southeast-1a/vol-02793441cbb6afbb6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02793441cbb6afbb6 is currently attached to i-063fbd80874e99720
	status code: 400, request id: c9d8910b-2f7b-4608-aa06-98ce6d32a8db
May 27 00:28:22.114: INFO: Couldn't delete PD "aws://ap-southeast-1a/vol-02793441cbb6afbb6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02793441cbb6afbb6 is currently attached to i-063fbd80874e99720
	status code: 400, request id: 559b8320-e383-4d8a-9d09-b1adeaee55a8
May 27 00:28:28.051: INFO: Successfully deleted PD "aws://ap-southeast-1a/vol-02793441cbb6afbb6".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:28:28.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-633" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":21,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:39.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-4368b050-82b1-433a-b577-dcbabc59ae26
STEP: Creating a pod to test consume secrets
May 27 00:27:40.663: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857" in namespace "projected-1540" to be "Succeeded or Failed"
May 27 00:27:40.864: INFO: Pod "pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857": Phase="Pending", Reason="", readiness=false. Elapsed: 200.485141ms
May 27 00:27:43.064: INFO: Pod "pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400858466s
May 27 00:27:45.264: INFO: Pod "pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600692452s
May 27 00:28:29.913: INFO: Pod "pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 49.249937048s
STEP: Saw pod success
May 27 00:28:29.913: INFO: Pod "pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857" satisfied condition "Succeeded or Failed"
May 27 00:28:30.113: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 27 00:28:30.522: INFO: Waiting for pod pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857 to disappear
May 27 00:28:30.722: INFO: Pod pod-projected-secrets-be9d408a-dcc5-4948-a83a-46cc7913f857 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:51.879 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 72 lines ...
May 27 00:27:36.318: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7938-aws-scdbj44
STEP: creating a claim
May 27 00:27:36.514: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-lq2n
STEP: Creating a pod to test subpath
May 27 00:27:37.103: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lq2n" in namespace "provisioning-7938" to be "Succeeded or Failed"
May 27 00:27:37.297: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 193.793174ms
May 27 00:27:39.492: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388318403s
May 27 00:27:41.686: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.582718479s
May 27 00:27:43.882: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778494115s
May 27 00:27:46.076: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.97230022s
May 27 00:27:48.271: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 11.167029519s
... skipping 3 lines ...
May 27 00:27:57.048: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 19.944051842s
May 27 00:27:59.241: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 22.137990633s
May 27 00:28:01.441: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 24.337302936s
May 27 00:28:03.635: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Pending", Reason="", readiness=false. Elapsed: 26.531529512s
May 27 00:28:05.829: INFO: Pod "pod-subpath-test-dynamicpv-lq2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.725429022s
STEP: Saw pod success
May 27 00:28:05.829: INFO: Pod "pod-subpath-test-dynamicpv-lq2n" satisfied condition "Succeeded or Failed"
May 27 00:28:06.023: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-lq2n container test-container-volume-dynamicpv-lq2n: <nil>
STEP: delete the pod
May 27 00:28:06.420: INFO: Waiting for pod pod-subpath-test-dynamicpv-lq2n to disappear
May 27 00:28:06.627: INFO: Pod pod-subpath-test-dynamicpv-lq2n no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-lq2n
May 27 00:28:06.627: INFO: Deleting pod "pod-subpath-test-dynamicpv-lq2n" in namespace "provisioning-7938"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":49,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:34.371: INFO: Only supported for providers [vsphere] (not aws)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 99 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-da060a3f-cd7d-412b-b3fa-94074756c8a4
STEP: Creating a pod to test consume configMaps
May 27 00:28:32.542: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc" in namespace "projected-491" to be "Succeeded or Failed"
May 27 00:28:32.742: INFO: Pod "pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 199.846803ms
May 27 00:28:34.942: INFO: Pod "pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.400103933s
STEP: Saw pod success
May 27 00:28:34.943: INFO: Pod "pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc" satisfied condition "Succeeded or Failed"
May 27 00:28:35.142: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc container agnhost-container: <nil>
STEP: delete the pod
May 27 00:28:35.552: INFO: Waiting for pod pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc to disappear
May 27 00:28:35.752: INFO: Pod pod-projected-configmaps-a861c3dc-918e-491b-a62a-46c0f9525bcc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.017 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:36.174: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 34 lines ...
• [SLOW TEST:61.702 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:36.456: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 214 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:38.043: INFO: Only supported for providers [azure] (not aws)
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:28:38.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9360" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 121 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
May 27 00:28:37.159: INFO: Waiting up to 5m0s for pod "pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d" in namespace "emptydir-4142" to be "Succeeded or Failed"
May 27 00:28:37.352: INFO: Pod "pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d": Phase="Pending", Reason="", readiness=false. Elapsed: 193.289136ms
May 27 00:28:39.553: INFO: Pod "pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.393789089s
STEP: Saw pod success
May 27 00:28:39.553: INFO: Pod "pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d" satisfied condition "Succeeded or Failed"
May 27 00:28:39.747: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d container test-container: <nil>
STEP: delete the pod
May 27 00:28:40.142: INFO: Waiting for pod pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d to disappear
May 27 00:28:40.336: INFO: Pod pod-cf35d8da-be2f-42e2-ac37-03ecb056df0d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:28:40.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4142" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":6,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:40.736: INFO: Only supported for providers [openstack] (not aws)
... skipping 103 lines ...
• [SLOW TEST:88.135 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:40.971: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 157 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1440
------------------------------
... skipping 79 lines ...
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-4343
[It] should perform rolling updates and roll backs of template modifications with PVCs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:284
STEP: Creating a new StatefulSet with PVCs
May 27 00:28:42.522: INFO: error finding default storageClass : No default storage class found
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 27 00:28:42.523: INFO: Deleting all statefulset in ns statefulset-4343
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:28:43.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should perform rolling updates and roll backs of template modifications with PVCs [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:284

    error finding default storageClass : No default storage class found

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:830
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:43.504: INFO: Only supported for providers [vsphere] (not aws)
... skipping 211 lines ...
May 27 00:27:57.308: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 27 00:27:57.308: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 27 00:27:57.308: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1612-aws-scql8l6      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1612    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1612-aws-scql8l6,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1612    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1612-aws-scql8l6,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-1612-aws-scql8l6
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 27 00:27:58.083: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-qhz69" in namespace "provisioning-1612" to be "Succeeded or Failed"
May 27 00:27:58.276: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 193.88032ms
May 27 00:28:00.470: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387012927s
May 27 00:28:02.663: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580236864s
May 27 00:28:04.857: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.774306714s
May 27 00:28:07.050: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 8.967396562s
May 27 00:28:09.244: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 11.161007698s
May 27 00:28:11.437: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 13.354309382s
May 27 00:28:13.631: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 15.547899604s
May 27 00:28:15.838: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Pending", Reason="", readiness=false. Elapsed: 17.755439429s
May 27 00:28:18.031: INFO: Pod "pvc-volume-tester-writer-qhz69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.948871705s
STEP: Saw pod success
May 27 00:28:18.032: INFO: Pod "pvc-volume-tester-writer-qhz69" satisfied condition "Succeeded or Failed"
May 27 00:28:18.432: INFO: Pod pvc-volume-tester-writer-qhz69 has the following logs: 
May 27 00:28:18.432: INFO: Deleting pod "pvc-volume-tester-writer-qhz69" in namespace "provisioning-1612"
May 27 00:28:18.628: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-qhz69" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-40-209.ap-southeast-1.compute.internal"
May 27 00:28:19.401: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-h5vbm" in namespace "provisioning-1612" to be "Succeeded or Failed"
May 27 00:28:19.594: INFO: Pod "pvc-volume-tester-reader-h5vbm": Phase="Pending", Reason="", readiness=false. Elapsed: 193.551723ms
May 27 00:28:21.789: INFO: Pod "pvc-volume-tester-reader-h5vbm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387791527s
May 27 00:28:23.983: INFO: Pod "pvc-volume-tester-reader-h5vbm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.581875937s
STEP: Saw pod success
May 27 00:28:23.983: INFO: Pod "pvc-volume-tester-reader-h5vbm" satisfied condition "Succeeded or Failed"
May 27 00:28:24.179: INFO: Pod pvc-volume-tester-reader-h5vbm has the following logs: hello world

May 27 00:28:24.179: INFO: Deleting pod "pvc-volume-tester-reader-h5vbm" in namespace "provisioning-1612"
May 27 00:28:24.376: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-h5vbm" to be fully deleted
May 27 00:28:24.569: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-cjq67] to have phase Bound
May 27 00:28:24.762: INFO: PersistentVolumeClaim pvc-cjq67 found and phase=Bound (192.771912ms)
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":3,"skipped":30,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:47.124: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 58 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:81
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":2,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:12.058 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:996
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:48.249: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 187 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:56.614: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 34 lines ...
May 27 00:28:45.888: INFO: PersistentVolumeClaim pvc-xlxrk found but phase is Pending instead of Bound.
May 27 00:28:48.077: INFO: PersistentVolumeClaim pvc-xlxrk found and phase=Bound (4.565606675s)
May 27 00:28:48.077: INFO: Waiting up to 3m0s for PersistentVolume local-p7z5s to have phase Bound
May 27 00:28:48.266: INFO: PersistentVolume local-p7z5s found and phase=Bound (188.296935ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vv4k
STEP: Creating a pod to test subpath
May 27 00:28:48.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vv4k" in namespace "provisioning-7100" to be "Succeeded or Failed"
May 27 00:28:49.021: INFO: Pod "pod-subpath-test-preprovisionedpv-vv4k": Phase="Pending", Reason="", readiness=false. Elapsed: 188.852731ms
May 27 00:28:51.210: INFO: Pod "pod-subpath-test-preprovisionedpv-vv4k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377709267s
May 27 00:28:53.400: INFO: Pod "pod-subpath-test-preprovisionedpv-vv4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.567411891s
STEP: Saw pod success
May 27 00:28:53.400: INFO: Pod "pod-subpath-test-preprovisionedpv-vv4k" satisfied condition "Succeeded or Failed"
May 27 00:28:53.588: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-vv4k container test-container-subpath-preprovisionedpv-vv4k: <nil>
STEP: delete the pod
May 27 00:28:53.976: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vv4k to disappear
May 27 00:28:54.164: INFO: Pod pod-subpath-test-preprovisionedpv-vv4k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vv4k
May 27 00:28:54.164: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vv4k" in namespace "provisioning-7100"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:28:56.837: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:00.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8263" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":4,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:01.143: INFO: Only supported for providers [azure] (not aws)
... skipping 35 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:28:32.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [k8s.io] GlusterDynamicProvisioner
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should create and delete persistent volumes [fast]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:749
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:02.102: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 19 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-986709ed-ebbc-4a62-a769-199c63a71691
STEP: Creating a pod to test consume configMaps
May 27 00:29:02.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9" in namespace "configmap-8713" to be "Succeeded or Failed"
May 27 00:29:02.688: INFO: Pod "pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9": Phase="Pending", Reason="", readiness=false. Elapsed: 188.458987ms
May 27 00:29:04.876: INFO: Pod "pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.377308292s
STEP: Saw pod success
May 27 00:29:04.877: INFO: Pod "pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9" satisfied condition "Succeeded or Failed"
May 27 00:29:05.065: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9 container agnhost-container: <nil>
STEP: delete the pod
May 27 00:29:05.456: INFO: Waiting for pod pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9 to disappear
May 27 00:29:05.645: INFO: Pod pod-configmaps-91b2b19d-2193-4680-a376-e87cb5b210c9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:05.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8713" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 135 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":4,"skipped":87,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:12.885: INFO: Driver emptydir doesn't support ntfs -- skipping
... skipping 81 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:100
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:241
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:09.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:273
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-2507" for this suite.


• [SLOW TEST:126.956 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:273
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:16.345: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 87 lines ...
May 27 00:28:43.875: INFO: Unable to read jessie_udp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:44.069: INFO: Unable to read jessie_tcp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:44.263: INFO: Unable to read jessie_udp@dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:44.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:44.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:44.840: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:45.996: INFO: Lookups using dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3202 wheezy_tcp@dns-test-service.dns-3202 wheezy_udp@dns-test-service.dns-3202.svc wheezy_tcp@dns-test-service.dns-3202.svc wheezy_udp@_http._tcp.dns-test-service.dns-3202.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3202.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3202 jessie_tcp@dns-test-service.dns-3202 jessie_udp@dns-test-service.dns-3202.svc jessie_tcp@dns-test-service.dns-3202.svc jessie_udp@_http._tcp.dns-test-service.dns-3202.svc jessie_tcp@_http._tcp.dns-test-service.dns-3202.svc]

May 27 00:28:51.191: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:51.383: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:51.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:51.767: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:51.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
... skipping 5 lines ...
May 27 00:28:54.278: INFO: Unable to read jessie_udp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:54.470: INFO: Unable to read jessie_tcp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:54.663: INFO: Unable to read jessie_udp@dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:54.854: INFO: Unable to read jessie_tcp@dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:55.046: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:55.238: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:28:56.411: INFO: Lookups using dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3202 wheezy_tcp@dns-test-service.dns-3202 wheezy_udp@dns-test-service.dns-3202.svc wheezy_tcp@dns-test-service.dns-3202.svc wheezy_udp@_http._tcp.dns-test-service.dns-3202.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3202.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3202 jessie_tcp@dns-test-service.dns-3202 jessie_udp@dns-test-service.dns-3202.svc jessie_tcp@dns-test-service.dns-3202.svc jessie_udp@_http._tcp.dns-test-service.dns-3202.svc jessie_tcp@_http._tcp.dns-test-service.dns-3202.svc]

May 27 00:29:01.189: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:29:01.382: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:29:01.574: INFO: Unable to read wheezy_udp@dns-test-service.dns-3202 from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:29:02.343: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3202.svc from pod dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318: the server could not find the requested resource (get pods dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318)
May 27 00:29:06.391: INFO: Lookups using dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3202 wheezy_udp@_http._tcp.dns-test-service.dns-3202.svc]

May 27 00:29:16.388: INFO: DNS probes using dns-3202/dns-test-bfc7ebc4-d754-4e0e-b47c-a37f41b18318 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":59,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 89 lines ...
• [SLOW TEST:7.624 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":103,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:20.590: INFO: Only supported for providers [gce gke] (not aws)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64
[It] should support unsafe sysctls which are actually whitelisted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:20.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7348" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:20.739: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:20.901: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:22.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-4598" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:22.533: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 110 lines ...
• [SLOW TEST:60.305 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not be ready until startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:396
------------------------------
{"msg":"PASSED [k8s.io] Probing container should not be ready until startupProbe succeeds","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:23.878: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 27 00:29:22.182: INFO: Waiting up to 5m0s for pod "pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd" in namespace "emptydir-9238" to be "Succeeded or Failed"
May 27 00:29:22.384: INFO: Pod "pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 202.141609ms
May 27 00:29:24.586: INFO: Pod "pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404386176s
May 27 00:29:26.789: INFO: Pod "pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.60678906s
STEP: Saw pod success
May 27 00:29:26.789: INFO: Pod "pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd" satisfied condition "Succeeded or Failed"
May 27 00:29:26.991: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd container test-container: <nil>
STEP: delete the pod
May 27 00:29:27.404: INFO: Waiting for pod pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd to disappear
May 27 00:29:27.606: INFO: Pod pod-fd89769d-bbe9-4ab7-9314-61956b78a4dd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:48
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:59
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":5,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:28.035: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:499
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":2,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
May 27 00:29:24.873: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 27 00:29:24.873: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-d55z
STEP: Creating a pod to test subpath
May 27 00:29:25.069: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-d55z" in namespace "provisioning-8299" to be "Succeeded or Failed"
May 27 00:29:25.261: INFO: Pod "pod-subpath-test-inlinevolume-d55z": Phase="Pending", Reason="", readiness=false. Elapsed: 192.340466ms
May 27 00:29:27.453: INFO: Pod "pod-subpath-test-inlinevolume-d55z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383906642s
May 27 00:29:29.646: INFO: Pod "pod-subpath-test-inlinevolume-d55z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.57709855s
STEP: Saw pod success
May 27 00:29:29.646: INFO: Pod "pod-subpath-test-inlinevolume-d55z" satisfied condition "Succeeded or Failed"
May 27 00:29:29.837: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-subpath-test-inlinevolume-d55z container test-container-subpath-inlinevolume-d55z: <nil>
STEP: delete the pod
May 27 00:29:30.230: INFO: Waiting for pod pod-subpath-test-inlinevolume-d55z to disappear
May 27 00:29:30.421: INFO: Pod pod-subpath-test-inlinevolume-d55z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-d55z
May 27 00:29:30.421: INFO: Deleting pod "pod-subpath-test-inlinevolume-d55z" in namespace "provisioning-8299"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:31.269: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 101 lines ...
• [SLOW TEST:11.367 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":6,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:31.985: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 88 lines ...
STEP: Creating a kubernetes client
May 27 00:29:28.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
May 27 00:29:29.073: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:32.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3738" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:33.754: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:179
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:28:59.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
May 27 00:29:14.294: INFO: PersistentVolume nfs-mfjlk found and phase=Bound (193.304145ms)
May 27 00:29:14.489: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zgq5n] to have phase Bound
May 27 00:29:14.684: INFO: PersistentVolumeClaim pvc-zgq5n found and phase=Bound (195.819806ms)
STEP: Checking pod has write access to PersistentVolumes
May 27 00:29:14.878: INFO: Creating nfs test pod
May 27 00:29:15.073: INFO: Pod should terminate with exitcode 0 (success)
May 27 00:29:15.073: INFO: Waiting up to 5m0s for pod "pvc-tester-dcplh" in namespace "pv-1283" to be "Succeeded or Failed"
May 27 00:29:15.266: INFO: Pod "pvc-tester-dcplh": Phase="Pending", Reason="", readiness=false. Elapsed: 193.40374ms
May 27 00:29:17.460: INFO: Pod "pvc-tester-dcplh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.387556648s
STEP: Saw pod success
May 27 00:29:17.460: INFO: Pod "pvc-tester-dcplh" satisfied condition "Succeeded or Failed"
May 27 00:29:17.460: INFO: Pod pvc-tester-dcplh succeeded 
May 27 00:29:17.460: INFO: Deleting pod "pvc-tester-dcplh" in namespace "pv-1283"
May 27 00:29:17.658: INFO: Wait up to 5m0s for pod "pvc-tester-dcplh" to be fully deleted
May 27 00:29:18.045: INFO: Creating nfs test pod
May 27 00:29:18.239: INFO: Pod should terminate with exitcode 0 (success)
May 27 00:29:18.239: INFO: Waiting up to 5m0s for pod "pvc-tester-dxqv7" in namespace "pv-1283" to be "Succeeded or Failed"
May 27 00:29:18.432: INFO: Pod "pvc-tester-dxqv7": Phase="Pending", Reason="", readiness=false. Elapsed: 193.281896ms
May 27 00:29:20.626: INFO: Pod "pvc-tester-dxqv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386870353s
STEP: Saw pod success
May 27 00:29:20.626: INFO: Pod "pvc-tester-dxqv7" satisfied condition "Succeeded or Failed"
May 27 00:29:20.626: INFO: Pod pvc-tester-dxqv7 succeeded 
May 27 00:29:20.626: INFO: Deleting pod "pvc-tester-dxqv7" in namespace "pv-1283"
May 27 00:29:20.831: INFO: Wait up to 5m0s for pod "pvc-tester-dxqv7" to be fully deleted
May 27 00:29:21.219: INFO: Creating nfs test pod
May 27 00:29:21.414: INFO: Pod should terminate with exitcode 0 (success)
May 27 00:29:21.414: INFO: Waiting up to 5m0s for pod "pvc-tester-8ns6n" in namespace "pv-1283" to be "Succeeded or Failed"
May 27 00:29:21.607: INFO: Pod "pvc-tester-8ns6n": Phase="Pending", Reason="", readiness=false. Elapsed: 193.30061ms
May 27 00:29:23.801: INFO: Pod "pvc-tester-8ns6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387040888s
May 27 00:29:25.994: INFO: Pod "pvc-tester-8ns6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.580565109s
STEP: Saw pod success
May 27 00:29:25.994: INFO: Pod "pvc-tester-8ns6n" satisfied condition "Succeeded or Failed"
May 27 00:29:25.994: INFO: Pod pvc-tester-8ns6n succeeded 
May 27 00:29:25.994: INFO: Deleting pod "pvc-tester-8ns6n" in namespace "pv-1283"
May 27 00:29:26.192: INFO: Wait up to 5m0s for pod "pvc-tester-8ns6n" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
May 27 00:29:26.774: INFO: Deleting PVC pvc-zgq5n to trigger reclamation of PV nfs-mfjlk
May 27 00:29:26.774: INFO: Deleting PersistentVolumeClaim "pvc-zgq5n"
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":9,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:36.919: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:29:36.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
May 27 00:29:38.112: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:ap-southeast-1a]
May 27 00:29:38.112: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
May 27 00:29:38.112: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 73 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-a3f38343-790c-4aeb-8e30-317e7ba4dacc
STEP: Creating a pod to test consume secrets
May 27 00:29:31.644: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67" in namespace "projected-155" to be "Succeeded or Failed"
May 27 00:29:31.833: INFO: Pod "pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67": Phase="Pending", Reason="", readiness=false. Elapsed: 189.039799ms
May 27 00:29:34.024: INFO: Pod "pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379604257s
May 27 00:29:36.214: INFO: Pod "pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569394779s
May 27 00:29:38.403: INFO: Pod "pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.758531511s
STEP: Saw pod success
May 27 00:29:38.403: INFO: Pod "pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67" satisfied condition "Succeeded or Failed"
May 27 00:29:38.592: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 27 00:29:38.987: INFO: Waiting for pod pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67 to disappear
May 27 00:29:39.176: INFO: Pod pod-projected-secrets-9c7ed308-2a3e-4dad-9236-b9a76d368e67 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 5 lines ...
• [SLOW TEST:10.248 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":3,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:40.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4915" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:40.415: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 166 lines ...
• [SLOW TEST:9.135 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":7,"skipped":120,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:41.210: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146
May 27 00:29:33.827: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-1293" to be "Succeeded or Failed"
May 27 00:29:34.029: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 201.852657ms
May 27 00:29:36.231: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404346233s
May 27 00:29:38.433: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.606508136s
May 27 00:29:40.635: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.808708988s
May 27 00:29:40.635: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:40.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1293" for this suite.


... skipping 36 lines ...
May 27 00:29:40.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 27 00:29:41.632: INFO: Waiting up to 5m0s for pod "pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7" in namespace "emptydir-5626" to be "Succeeded or Failed"
May 27 00:29:41.821: INFO: Pod "pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 188.681091ms
May 27 00:29:44.010: INFO: Pod "pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.377833203s
STEP: Saw pod success
May 27 00:29:44.010: INFO: Pod "pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7" satisfied condition "Succeeded or Failed"
May 27 00:29:44.199: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7 container test-container: <nil>
STEP: delete the pod
May 27 00:29:44.585: INFO: Waiting for pod pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7 to disappear
May 27 00:29:44.774: INFO: Pod pod-b88e27d7-9cc8-423b-bf9f-16761786a2f7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:44.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5626" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:13.899 seconds]
[k8s.io] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":5,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:45.289: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:29:12.167: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
May 27 00:29:31.099: INFO: PersistentVolumeClaim pvc-c2xdk found but phase is Pending instead of Bound.
May 27 00:29:33.297: INFO: PersistentVolumeClaim pvc-c2xdk found and phase=Bound (15.627439157s)
May 27 00:29:33.298: INFO: Waiting up to 3m0s for PersistentVolume local-6dvcr to have phase Bound
May 27 00:29:33.496: INFO: PersistentVolume local-6dvcr found and phase=Bound (198.293552ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wvlq
STEP: Creating a pod to test subpath
May 27 00:29:34.100: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wvlq" in namespace "provisioning-8056" to be "Succeeded or Failed"
May 27 00:29:34.298: INFO: Pod "pod-subpath-test-preprovisionedpv-wvlq": Phase="Pending", Reason="", readiness=false. Elapsed: 198.38876ms
May 27 00:29:36.502: INFO: Pod "pod-subpath-test-preprovisionedpv-wvlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40241785s
May 27 00:29:38.702: INFO: Pod "pod-subpath-test-preprovisionedpv-wvlq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601891764s
May 27 00:29:40.900: INFO: Pod "pod-subpath-test-preprovisionedpv-wvlq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.800490051s
May 27 00:29:43.099: INFO: Pod "pod-subpath-test-preprovisionedpv-wvlq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.999583963s
STEP: Saw pod success
May 27 00:29:43.099: INFO: Pod "pod-subpath-test-preprovisionedpv-wvlq" satisfied condition "Succeeded or Failed"
May 27 00:29:43.298: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-wvlq container test-container-subpath-preprovisionedpv-wvlq: <nil>
STEP: delete the pod
May 27 00:29:43.712: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wvlq to disappear
May 27 00:29:43.910: INFO: Pod pod-subpath-test-preprovisionedpv-wvlq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wvlq
May 27 00:29:43.910: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wvlq" in namespace "provisioning-8056"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:46.577: INFO: Driver local doesn't support ext3 -- skipping
... skipping 125 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":7,"skipped":41,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:29:41.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 46 lines ...
• [SLOW TEST:8.330 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:47.119: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 51 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 27 00:29:44.553: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6ea0f578-3797-4098-b95e-439f8c8dbf48"
May 27 00:29:44.553: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6ea0f578-3797-4098-b95e-439f8c8dbf48" in namespace "pods-2629" to be "terminated due to deadline exceeded"
May 27 00:29:44.744: INFO: Pod "pod-update-activedeadlineseconds-6ea0f578-3797-4098-b95e-439f8c8dbf48": Phase="Running", Reason="", readiness=true. Elapsed: 190.82008ms
May 27 00:29:46.934: INFO: Pod "pod-update-activedeadlineseconds-6ea0f578-3797-4098-b95e-439f8c8dbf48": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.380085133s
May 27 00:29:46.934: INFO: Pod "pod-update-activedeadlineseconds-6ea0f578-3797-4098-b95e-439f8c8dbf48" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:29:46.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2629" for this suite.


• [SLOW TEST:7.539 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:47.325: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 217 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:48.999: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 217 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:310
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:52.325: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 56 lines ...
• [SLOW TEST:6.948 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":11,"skipped":92,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:29:54.127: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:29:48.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:5.939 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:115
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:112
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 10 lines ...
May 27 00:29:37.800: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 27 00:29:37.800: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 27 00:29:37.800: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-5871-nfs-scvggxz      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:example.com/nfs-provisioning-5871,Parameters:map[string]string{mountOptions: vers=4.1,},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-5871    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-5871-nfs-scvggxz,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-5871    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-5871-nfs-scvggxz,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-5871-nfs-scvggxz
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 27 00:29:38.571: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-9jmbh" in namespace "provisioning-5871" to be "Succeeded or Failed"
May 27 00:29:38.766: INFO: Pod "pvc-volume-tester-writer-9jmbh": Phase="Pending", Reason="", readiness=false. Elapsed: 195.204073ms
May 27 00:29:40.958: INFO: Pod "pvc-volume-tester-writer-9jmbh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387024802s
May 27 00:29:43.150: INFO: Pod "pvc-volume-tester-writer-9jmbh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579100269s
May 27 00:29:45.342: INFO: Pod "pvc-volume-tester-writer-9jmbh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.771050316s
May 27 00:29:47.535: INFO: Pod "pvc-volume-tester-writer-9jmbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.963520892s
STEP: Saw pod success
May 27 00:29:47.535: INFO: Pod "pvc-volume-tester-writer-9jmbh" satisfied condition "Succeeded or Failed"
May 27 00:29:47.921: INFO: Pod pvc-volume-tester-writer-9jmbh has the following logs: 
May 27 00:29:47.921: INFO: Deleting pod "pvc-volume-tester-writer-9jmbh" in namespace "provisioning-5871"
May 27 00:29:48.117: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-9jmbh" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-33-93.ap-southeast-1.compute.internal"
May 27 00:29:48.889: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-bk8gs" in namespace "provisioning-5871" to be "Succeeded or Failed"
May 27 00:29:49.082: INFO: Pod "pvc-volume-tester-reader-bk8gs": Phase="Pending", Reason="", readiness=false. Elapsed: 193.396039ms
May 27 00:29:51.274: INFO: Pod "pvc-volume-tester-reader-bk8gs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385554596s
May 27 00:29:53.466: INFO: Pod "pvc-volume-tester-reader-bk8gs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.577714344s
May 27 00:29:55.658: INFO: Pod "pvc-volume-tester-reader-bk8gs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.769776893s
STEP: Saw pod success
May 27 00:29:55.659: INFO: Pod "pvc-volume-tester-reader-bk8gs" satisfied condition "Succeeded or Failed"
May 27 00:29:55.871: INFO: Pod pvc-volume-tester-reader-bk8gs has the following logs: hello world

May 27 00:29:55.871: INFO: Deleting pod "pvc-volume-tester-reader-bk8gs" in namespace "provisioning-5871"
May 27 00:29:56.071: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-bk8gs" to be fully deleted
May 27 00:29:56.263: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-5ld6l] to have phase Bound
May 27 00:29:56.454: INFO: PersistentVolumeClaim pvc-5ld6l found and phase=Bound (191.661028ms)
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":7,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:00.792: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 63 lines ...
• [SLOW TEST:54.826 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 106 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":8,"skipped":122,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:29:44.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":9,"skipped":122,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:04.720: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-be491bd0-e344-4fe4-b16f-e22dc340937c
STEP: Creating a pod to test consume configMaps
May 27 00:30:02.181: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256" in namespace "configmap-2379" to be "Succeeded or Failed"
May 27 00:30:02.373: INFO: Pod "pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256": Phase="Pending", Reason="", readiness=false. Elapsed: 192.255543ms
May 27 00:30:04.566: INFO: Pod "pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.385195974s
STEP: Saw pod success
May 27 00:30:04.566: INFO: Pod "pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256" satisfied condition "Succeeded or Failed"
May 27 00:30:04.759: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256 container agnhost-container: <nil>
STEP: delete the pod
May 27 00:30:05.159: INFO: Waiting for pod pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256 to disappear
May 27 00:30:05.351: INFO: Pod pod-configmaps-f6b67499-dd6c-472e-b30a-28c12e73b256 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:05.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2379" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:05.757: INFO: Only supported for providers [gce gke] (not aws)
... skipping 67 lines ...
May 27 00:29:49.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
May 27 00:29:49.952: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 27 00:29:50.332: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-932" in namespace "provisioning-932" to be "Succeeded or Failed"
May 27 00:29:50.520: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Pending", Reason="", readiness=false. Elapsed: 187.62764ms
May 27 00:29:52.708: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375954855s
May 27 00:29:54.896: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.563741598s
STEP: Saw pod success
May 27 00:29:54.896: INFO: Pod "hostpath-symlink-prep-provisioning-932" satisfied condition "Succeeded or Failed"
May 27 00:29:54.896: INFO: Deleting pod "hostpath-symlink-prep-provisioning-932" in namespace "provisioning-932"
May 27 00:29:55.089: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-932" to be fully deleted
May 27 00:29:55.277: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tqch
STEP: Creating a pod to test subpath
May 27 00:29:55.473: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tqch" in namespace "provisioning-932" to be "Succeeded or Failed"
May 27 00:29:55.660: INFO: Pod "pod-subpath-test-inlinevolume-tqch": Phase="Pending", Reason="", readiness=false. Elapsed: 187.525093ms
May 27 00:29:57.849: INFO: Pod "pod-subpath-test-inlinevolume-tqch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375688836s
May 27 00:30:00.036: INFO: Pod "pod-subpath-test-inlinevolume-tqch": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.563651899s
STEP: Saw pod success
May 27 00:30:00.037: INFO: Pod "pod-subpath-test-inlinevolume-tqch" satisfied condition "Succeeded or Failed"
May 27 00:30:00.224: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-subpath-test-inlinevolume-tqch container test-container-volume-inlinevolume-tqch: <nil>
STEP: delete the pod
May 27 00:30:00.611: INFO: Waiting for pod pod-subpath-test-inlinevolume-tqch to disappear
May 27 00:30:00.799: INFO: Pod pod-subpath-test-inlinevolume-tqch no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tqch
May 27 00:30:00.799: INFO: Deleting pod "pod-subpath-test-inlinevolume-tqch" in namespace "provisioning-932"
STEP: Deleting pod
May 27 00:30:00.987: INFO: Deleting pod "pod-subpath-test-inlinevolume-tqch" in namespace "provisioning-932"
May 27 00:30:01.368: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-932" in namespace "provisioning-932" to be "Succeeded or Failed"
May 27 00:30:01.556: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Pending", Reason="", readiness=false. Elapsed: 188.061924ms
May 27 00:30:03.745: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376691802s
May 27 00:30:05.939: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571095929s
May 27 00:30:08.127: INFO: Pod "hostpath-symlink-prep-provisioning-932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.758969219s
STEP: Saw pod success
May 27 00:30:08.127: INFO: Pod "hostpath-symlink-prep-provisioning-932" satisfied condition "Succeeded or Failed"
May 27 00:30:08.127: INFO: Deleting pod "hostpath-symlink-prep-provisioning-932" in namespace "provisioning-932"
May 27 00:30:08.319: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-932" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:08.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-932" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:08.912: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:12.215: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 104 lines ...
May 27 00:30:00.619: INFO: PersistentVolumeClaim pvc-tfmlc found but phase is Pending instead of Bound.
May 27 00:30:02.808: INFO: PersistentVolumeClaim pvc-tfmlc found and phase=Bound (8.946679016s)
May 27 00:30:02.808: INFO: Waiting up to 3m0s for PersistentVolume local-8tdx9 to have phase Bound
May 27 00:30:02.997: INFO: PersistentVolume local-8tdx9 found and phase=Bound (188.607506ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-24c5
STEP: Creating a pod to test subpath
May 27 00:30:03.565: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-24c5" in namespace "provisioning-4581" to be "Succeeded or Failed"
May 27 00:30:03.754: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 188.813363ms
May 27 00:30:05.952: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386996115s
May 27 00:30:08.145: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580393295s
May 27 00:30:10.334: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.769136108s
May 27 00:30:12.528: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.963254801s
STEP: Saw pod success
May 27 00:30:12.528: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5" satisfied condition "Succeeded or Failed"
May 27 00:30:12.717: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-24c5 container test-container-subpath-preprovisionedpv-24c5: <nil>
STEP: delete the pod
May 27 00:30:13.103: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-24c5 to disappear
May 27 00:30:13.293: INFO: Pod pod-subpath-test-preprovisionedpv-24c5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-24c5
May 27 00:30:13.293: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-24c5" in namespace "provisioning-4581"
STEP: Creating pod pod-subpath-test-preprovisionedpv-24c5
STEP: Creating a pod to test subpath
May 27 00:30:13.672: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-24c5" in namespace "provisioning-4581" to be "Succeeded or Failed"
May 27 00:30:13.861: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 188.735547ms
May 27 00:30:16.052: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379444831s
STEP: Saw pod success
May 27 00:30:16.052: INFO: Pod "pod-subpath-test-preprovisionedpv-24c5" satisfied condition "Succeeded or Failed"
May 27 00:30:16.241: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-24c5 container test-container-subpath-preprovisionedpv-24c5: <nil>
STEP: delete the pod
May 27 00:30:16.628: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-24c5 to disappear
May 27 00:30:16.817: INFO: Pod pod-subpath-test-preprovisionedpv-24c5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-24c5
May 27 00:30:16.817: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-24c5" in namespace "provisioning-4581"
... skipping 52 lines ...
• [SLOW TEST:98.838 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:309
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted startup probe fails","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:26.055: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361

      Distro debian doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:184
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":20,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:30:21.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:26.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1977" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:27.328: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 257 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:30.709: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 107 lines ...
• [SLOW TEST:117.023 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:200
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":4,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:36.243: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 73 lines ...
• [SLOW TEST:9.522 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:36.927: INFO: Only supported for providers [azure] (not aws)
... skipping 94 lines ...
May 27 00:29:30.581: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 27 00:29:30.794: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathxpsw5] to have phase Bound
May 27 00:29:30.997: INFO: PersistentVolumeClaim csi-hostpathxpsw5 found but phase is Pending instead of Bound.
May 27 00:29:33.197: INFO: PersistentVolumeClaim csi-hostpathxpsw5 found and phase=Bound (2.403016924s)
STEP: Creating pod pod-subpath-test-dynamicpv-g6zs
STEP: Creating a pod to test subpath
May 27 00:29:33.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-g6zs" in namespace "provisioning-7905" to be "Succeeded or Failed"
May 27 00:29:34.000: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 202.617181ms
May 27 00:29:36.199: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40219258s
May 27 00:29:38.399: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601595254s
May 27 00:29:40.598: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.801184059s
May 27 00:29:42.799: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001750646s
May 27 00:29:44.999: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 11.201578509s
May 27 00:29:47.198: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 13.401378247s
May 27 00:29:49.398: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 15.600693822s
May 27 00:29:51.598: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Pending", Reason="", readiness=false. Elapsed: 17.80060855s
May 27 00:29:53.797: INFO: Pod "pod-subpath-test-dynamicpv-g6zs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.000195766s
STEP: Saw pod success
May 27 00:29:53.797: INFO: Pod "pod-subpath-test-dynamicpv-g6zs" satisfied condition "Succeeded or Failed"
May 27 00:29:53.997: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-g6zs container test-container-subpath-dynamicpv-g6zs: <nil>
STEP: delete the pod
May 27 00:29:54.412: INFO: Waiting for pod pod-subpath-test-dynamicpv-g6zs to disappear
May 27 00:29:54.612: INFO: Pod pod-subpath-test-dynamicpv-g6zs no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-g6zs
May 27 00:29:54.612: INFO: Deleting pod "pod-subpath-test-dynamicpv-g6zs" in namespace "provisioning-7905"
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:37.173: INFO: Only supported for providers [openstack] (not aws)
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
STEP: Creating a pod to test hostPath r/w
May 27 00:30:38.099: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5058" to be "Succeeded or Failed"
May 27 00:30:38.289: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 190.579264ms
May 27 00:30:40.480: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.38147133s
STEP: Saw pod success
May 27 00:30:40.480: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 27 00:30:40.672: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
May 27 00:30:41.062: INFO: Waiting for pod pod-host-path-test to disappear
May 27 00:30:41.256: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:41.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5058" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":9,"skipped":77,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:41.665: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 161 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:43.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-472" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":10,"skipped":92,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:43.490: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
• [SLOW TEST:17.492 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":5,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:43.587: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 80 lines ...
• [SLOW TEST:9.040 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":6,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:46.238: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
May 27 00:30:31.158: INFO: PersistentVolumeClaim pvc-v5d8r found and phase=Bound (13.359376991s)
May 27 00:30:31.159: INFO: Waiting up to 3m0s for PersistentVolume nfs-cwkpc to have phase Bound
May 27 00:30:31.352: INFO: PersistentVolume nfs-cwkpc found and phase=Bound (193.751675ms)
STEP: Checking pod has write access to PersistentVolume
May 27 00:30:31.740: INFO: Creating nfs test pod
May 27 00:30:31.934: INFO: Pod should terminate with exitcode 0 (success)
May 27 00:30:31.934: INFO: Waiting up to 5m0s for pod "pvc-tester-jt6bl" in namespace "pv-9197" to be "Succeeded or Failed"
May 27 00:30:32.128: INFO: Pod "pvc-tester-jt6bl": Phase="Pending", Reason="", readiness=false. Elapsed: 193.74981ms
May 27 00:30:34.322: INFO: Pod "pvc-tester-jt6bl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.387693372s
STEP: Saw pod success
May 27 00:30:34.322: INFO: Pod "pvc-tester-jt6bl" satisfied condition "Succeeded or Failed"
May 27 00:30:34.322: INFO: Pod pvc-tester-jt6bl succeeded 
May 27 00:30:34.322: INFO: Deleting pod "pvc-tester-jt6bl" in namespace "pv-9197"
May 27 00:30:34.520: INFO: Wait up to 5m0s for pod "pvc-tester-jt6bl" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
May 27 00:30:34.714: INFO: Deleting PVC pvc-v5d8r to trigger reclamation of PV nfs-cwkpc
May 27 00:30:34.714: INFO: Deleting PersistentVolumeClaim "pvc-v5d8r"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":10,"skipped":126,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:46.700: INFO: Only supported for providers [gce gke] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 27 00:30:46.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f" in namespace "downward-api-2594" to be "Succeeded or Failed"
May 27 00:30:46.344: INFO: Pod "downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f": Phase="Pending", Reason="", readiness=false. Elapsed: 193.420359ms
May 27 00:30:48.537: INFO: Pod "downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386664873s
STEP: Saw pod success
May 27 00:30:48.537: INFO: Pod "downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f" satisfied condition "Succeeded or Failed"
May 27 00:30:48.730: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f container client-container: <nil>
STEP: delete the pod
May 27 00:30:49.124: INFO: Waiting for pod downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f to disappear
May 27 00:30:49.317: INFO: Pod downwardapi-volume-36ffc28f-e800-4b01-a8df-4468bae2d93f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:49.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2594" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:49.722: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 30 lines ...
May 27 00:30:13.266: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7241-aws-scjnn65
STEP: creating a claim
May 27 00:30:13.470: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rpvk
STEP: Creating a pod to test subpath
May 27 00:30:14.076: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rpvk" in namespace "provisioning-7241" to be "Succeeded or Failed"
May 27 00:30:14.276: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 199.825802ms
May 27 00:30:16.476: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400167479s
May 27 00:30:18.677: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600744904s
May 27 00:30:20.881: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.80471492s
May 27 00:30:23.081: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 9.004874925s
May 27 00:30:25.281: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.205084816s
May 27 00:30:27.481: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.40512337s
May 27 00:30:29.681: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.605165153s
May 27 00:30:31.881: INFO: Pod "pod-subpath-test-dynamicpv-rpvk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.805261195s
STEP: Saw pod success
May 27 00:30:31.881: INFO: Pod "pod-subpath-test-dynamicpv-rpvk" satisfied condition "Succeeded or Failed"
May 27 00:30:32.081: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-rpvk container test-container-subpath-dynamicpv-rpvk: <nil>
STEP: delete the pod
May 27 00:30:32.493: INFO: Waiting for pod pod-subpath-test-dynamicpv-rpvk to disappear
May 27 00:30:32.693: INFO: Pod pod-subpath-test-dynamicpv-rpvk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rpvk
May 27 00:30:32.693: INFO: Deleting pod "pod-subpath-test-dynamicpv-rpvk" in namespace "provisioning-7241"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:50.109: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 94 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:50.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9408" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":7,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:50.873: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
May 27 00:30:46.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
May 27 00:30:47.888: INFO: Waiting up to 5m0s for pod "pod-9026420b-40d6-46b5-ab56-858977e42e6e" in namespace "emptydir-5281" to be "Succeeded or Failed"
May 27 00:30:48.082: INFO: Pod "pod-9026420b-40d6-46b5-ab56-858977e42e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 193.618149ms
May 27 00:30:50.276: INFO: Pod "pod-9026420b-40d6-46b5-ab56-858977e42e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.387767536s
STEP: Saw pod success
May 27 00:30:50.276: INFO: Pod "pod-9026420b-40d6-46b5-ab56-858977e42e6e" satisfied condition "Succeeded or Failed"
May 27 00:30:50.469: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-9026420b-40d6-46b5-ab56-858977e42e6e container test-container: <nil>
STEP: delete the pod
May 27 00:30:50.864: INFO: Waiting for pod pod-9026420b-40d6-46b5-ab56-858977e42e6e to disappear
May 27 00:30:51.058: INFO: Pod pod-9026420b-40d6-46b5-ab56-858977e42e6e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:51.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5281" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:51.457: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 222 lines ...
STEP: Destroying namespace "services-7361" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 27 00:30:52.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509" in namespace "projected-3065" to be "Succeeded or Failed"
May 27 00:30:52.881: INFO: Pod "downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509": Phase="Pending", Reason="", readiness=false. Elapsed: 193.680043ms
May 27 00:30:55.076: INFO: Pod "downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.388087329s
STEP: Saw pod success
May 27 00:30:55.076: INFO: Pod "downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509" satisfied condition "Succeeded or Failed"
May 27 00:30:55.270: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509 container client-container: <nil>
STEP: delete the pod
May 27 00:30:55.664: INFO: Waiting for pod downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509 to disappear
May 27 00:30:55.858: INFO: Pod downwardapi-volume-0e9d3281-c10b-4f74-b7d0-5bc12b5b6509 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:30:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3065" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":149,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:30:56.256: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 37 lines ...
May 27 00:30:45.399: INFO: PersistentVolumeClaim pvc-9dnsc found but phase is Pending instead of Bound.
May 27 00:30:47.588: INFO: PersistentVolumeClaim pvc-9dnsc found and phase=Bound (4.565153196s)
May 27 00:30:47.588: INFO: Waiting up to 3m0s for PersistentVolume local-6brbp to have phase Bound
May 27 00:30:47.776: INFO: PersistentVolume local-6brbp found and phase=Bound (188.511538ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ccfl
STEP: Creating a pod to test exec-volume-test
May 27 00:30:48.342: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ccfl" in namespace "volume-1972" to be "Succeeded or Failed"
May 27 00:30:48.531: INFO: Pod "exec-volume-test-preprovisionedpv-ccfl": Phase="Pending", Reason="", readiness=false. Elapsed: 188.371599ms
May 27 00:30:50.719: INFO: Pod "exec-volume-test-preprovisionedpv-ccfl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.376813694s
STEP: Saw pod success
May 27 00:30:50.719: INFO: Pod "exec-volume-test-preprovisionedpv-ccfl" satisfied condition "Succeeded or Failed"
May 27 00:30:50.909: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod exec-volume-test-preprovisionedpv-ccfl container exec-container-preprovisionedpv-ccfl: <nil>
STEP: delete the pod
May 27 00:30:51.293: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ccfl to disappear
May 27 00:30:51.481: INFO: Pod exec-volume-test-preprovisionedpv-ccfl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ccfl
May 27 00:30:51.481: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ccfl" in namespace "volume-1972"
... skipping 132 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:310
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:30:57.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:01.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8336" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:01.671: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 18 lines ...
May 27 00:30:56.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override command
May 27 00:30:57.431: INFO: Waiting up to 5m0s for pod "client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5" in namespace "containers-8326" to be "Succeeded or Failed"
May 27 00:30:57.625: INFO: Pod "client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5": Phase="Pending", Reason="", readiness=false. Elapsed: 193.470068ms
May 27 00:30:59.819: INFO: Pod "client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387896431s
May 27 00:31:02.014: INFO: Pod "client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.582105029s
STEP: Saw pod success
May 27 00:31:02.014: INFO: Pod "client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5" satisfied condition "Succeeded or Failed"
May 27 00:31:02.208: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5 container agnhost-container: <nil>
STEP: delete the pod
May 27 00:31:02.602: INFO: Waiting for pod client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5 to disappear
May 27 00:31:02.795: INFO: Pod client-containers-7737e9db-0675-4fa6-baa9-82b2411982e5 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.920 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":150,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:03.194: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206

      Driver "nfs" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:84
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":43,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:30:56.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
May 27 00:30:57.596: INFO: Waiting up to 5m0s for pod "pod-3e2b2c40-0814-4bed-baec-b93fdc236eef" in namespace "emptydir-8230" to be "Succeeded or Failed"
May 27 00:30:57.784: INFO: Pod "pod-3e2b2c40-0814-4bed-baec-b93fdc236eef": Phase="Pending", Reason="", readiness=false. Elapsed: 187.970457ms
May 27 00:30:59.972: INFO: Pod "pod-3e2b2c40-0814-4bed-baec-b93fdc236eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376352205s
May 27 00:31:02.161: INFO: Pod "pod-3e2b2c40-0814-4bed-baec-b93fdc236eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.565164802s
STEP: Saw pod success
May 27 00:31:02.161: INFO: Pod "pod-3e2b2c40-0814-4bed-baec-b93fdc236eef" satisfied condition "Succeeded or Failed"
May 27 00:31:02.349: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-3e2b2c40-0814-4bed-baec-b93fdc236eef container test-container: <nil>
STEP: delete the pod
May 27 00:31:02.734: INFO: Waiting for pod pod-3e2b2c40-0814-4bed-baec-b93fdc236eef to disappear
May 27 00:31:02.922: INFO: Pod pod-3e2b2c40-0814-4bed-baec-b93fdc236eef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.871 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":4,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 376 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:31:05.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
May 27 00:31:06.458: INFO: Waiting up to 5m0s for pod "downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a" in namespace "downward-api-4477" to be "Succeeded or Failed"
May 27 00:31:06.657: INFO: Pod "downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a": Phase="Pending", Reason="", readiness=false. Elapsed: 199.422745ms
May 27 00:31:08.859: INFO: Pod "downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.400899842s
STEP: Saw pod success
May 27 00:31:08.859: INFO: Pod "downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a" satisfied condition "Succeeded or Failed"
May 27 00:31:09.059: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a container dapi-container: <nil>
STEP: delete the pod
May 27 00:31:09.466: INFO: Waiting for pod downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a to disappear
May 27 00:31:09.675: INFO: Pod downward-api-30f949f3-e9eb-4990-9c2b-d44030a2f53a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:09.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4477" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:10.098: INFO: Driver local doesn't support ntfs -- skipping
... skipping 14 lines ...
      Driver local doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:178
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:31:04.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-434b3c62-ed42-4baf-addf-2ed5300600e1
STEP: Creating a pod to test consume secrets
May 27 00:31:05.895: INFO: Waiting up to 5m0s for pod "pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84" in namespace "secrets-9442" to be "Succeeded or Failed"
May 27 00:31:06.088: INFO: Pod "pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84": Phase="Pending", Reason="", readiness=false. Elapsed: 192.981283ms
May 27 00:31:08.282: INFO: Pod "pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386309047s
May 27 00:31:10.475: INFO: Pod "pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579620428s
May 27 00:31:12.668: INFO: Pod "pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.772809236s
STEP: Saw pod success
May 27 00:31:12.668: INFO: Pod "pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84" satisfied condition "Succeeded or Failed"
May 27 00:31:12.862: INFO: Trying to get logs from node ip-172-20-33-93.ap-southeast-1.compute.internal pod pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84 container secret-volume-test: <nil>
STEP: delete the pod
May 27 00:31:13.259: INFO: Waiting for pod pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84 to disappear
May 27 00:31:13.452: INFO: Pod pod-secrets-b73dc795-7ef4-4830-8cfb-cef6ab307d84 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.302 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:13.856: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 242 lines ...
• [SLOW TEST:192.419 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:145
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:14.051: INFO: Only supported for providers [gce gke] (not aws)
... skipping 187 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:502
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:31:13.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
May 27 00:31:15.078: INFO: Waiting up to 5m0s for pod "downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b" in namespace "downward-api-9934" to be "Succeeded or Failed"
May 27 00:31:15.271: INFO: Pod "downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b": Phase="Pending", Reason="", readiness=false. Elapsed: 192.955965ms
May 27 00:31:17.464: INFO: Pod "downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386414789s
STEP: Saw pod success
May 27 00:31:17.464: INFO: Pod "downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b" satisfied condition "Succeeded or Failed"
May 27 00:31:17.657: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b container dapi-container: <nil>
STEP: delete the pod
May 27 00:31:18.053: INFO: Waiting for pod downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b to disappear
May 27 00:31:18.246: INFO: Pod downward-api-874cb420-dc51-4dd8-96c5-006b625ab41b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 26 lines ...
• [SLOW TEST:251.274 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:338
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":3,"skipped":22,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 50 lines ...
May 27 00:30:39.367: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 27 00:30:39.557: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath4tpls] to have phase Bound
May 27 00:30:39.748: INFO: PersistentVolumeClaim csi-hostpath4tpls found but phase is Pending instead of Bound.
May 27 00:30:41.939: INFO: PersistentVolumeClaim csi-hostpath4tpls found and phase=Bound (2.381452544s)
STEP: Creating pod pod-subpath-test-dynamicpv-snrc
STEP: Creating a pod to test subpath
May 27 00:30:42.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-snrc" in namespace "provisioning-7710" to be "Succeeded or Failed"
May 27 00:30:42.695: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 188.767937ms
May 27 00:30:44.885: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378158077s
May 27 00:30:47.074: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.56726495s
May 27 00:30:49.263: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756721558s
May 27 00:30:51.453: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.945961254s
May 27 00:30:53.641: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.134758357s
May 27 00:30:55.830: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.32377914s
May 27 00:30:58.020: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.512926215s
May 27 00:31:00.212: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.705171095s
May 27 00:31:02.401: INFO: Pod "pod-subpath-test-dynamicpv-snrc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.894219728s
STEP: Saw pod success
May 27 00:31:02.401: INFO: Pod "pod-subpath-test-dynamicpv-snrc" satisfied condition "Succeeded or Failed"
May 27 00:31:02.590: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-snrc container test-container-subpath-dynamicpv-snrc: <nil>
STEP: delete the pod
May 27 00:31:02.983: INFO: Waiting for pod pod-subpath-test-dynamicpv-snrc to disappear
May 27 00:31:03.171: INFO: Pod pod-subpath-test-dynamicpv-snrc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-snrc
May 27 00:31:03.172: INFO: Deleting pod "pod-subpath-test-dynamicpv-snrc" in namespace "provisioning-7710"
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:23.258: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":9,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:555
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:584
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":3,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:27.091: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 212 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:28.967: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 194 lines ...
• [SLOW TEST:10.602 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":9,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:33.909: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 169 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:31:33.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:7.542 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:41.392: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 173 lines ...
May 27 00:31:18.070: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfsfww94] to have phase Bound
May 27 00:31:18.258: INFO: PersistentVolumeClaim nfsfww94 found but phase is Pending instead of Bound.
May 27 00:31:20.446: INFO: PersistentVolumeClaim nfsfww94 found but phase is Pending instead of Bound.
May 27 00:31:22.635: INFO: PersistentVolumeClaim nfsfww94 found and phase=Bound (4.565077937s)
STEP: Creating pod pod-subpath-test-dynamicpv-l6vv
STEP: Creating a pod to test subpath
May 27 00:31:23.201: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-l6vv" in namespace "provisioning-9403" to be "Succeeded or Failed"
May 27 00:31:23.390: INFO: Pod "pod-subpath-test-dynamicpv-l6vv": Phase="Pending", Reason="", readiness=false. Elapsed: 188.381574ms
May 27 00:31:25.578: INFO: Pod "pod-subpath-test-dynamicpv-l6vv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376763377s
May 27 00:31:27.767: INFO: Pod "pod-subpath-test-dynamicpv-l6vv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565191586s
May 27 00:31:29.955: INFO: Pod "pod-subpath-test-dynamicpv-l6vv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.754018966s
STEP: Saw pod success
May 27 00:31:29.955: INFO: Pod "pod-subpath-test-dynamicpv-l6vv" satisfied condition "Succeeded or Failed"
May 27 00:31:30.144: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-subpath-test-dynamicpv-l6vv container test-container-subpath-dynamicpv-l6vv: <nil>
STEP: delete the pod
May 27 00:31:30.530: INFO: Waiting for pod pod-subpath-test-dynamicpv-l6vv to disappear
May 27 00:31:30.718: INFO: Pod pod-subpath-test-dynamicpv-l6vv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-l6vv
May 27 00:31:30.718: INFO: Deleting pod "pod-subpath-test-dynamicpv-l6vv" in namespace "provisioning-9403"
... skipping 53 lines ...
May 27 00:31:29.523: INFO: PersistentVolumeClaim pvc-sxltq found but phase is Pending instead of Bound.
May 27 00:31:31.724: INFO: PersistentVolumeClaim pvc-sxltq found and phase=Bound (13.4049328s)
May 27 00:31:31.724: INFO: Waiting up to 3m0s for PersistentVolume local-vqblv to have phase Bound
May 27 00:31:31.923: INFO: PersistentVolume local-vqblv found and phase=Bound (199.782366ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-v5r5
STEP: Creating a pod to test subpath
May 27 00:31:32.525: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v5r5" in namespace "provisioning-3859" to be "Succeeded or Failed"
May 27 00:31:32.725: INFO: Pod "pod-subpath-test-preprovisionedpv-v5r5": Phase="Pending", Reason="", readiness=false. Elapsed: 200.05307ms
May 27 00:31:34.925: INFO: Pod "pod-subpath-test-preprovisionedpv-v5r5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400147337s
May 27 00:31:37.126: INFO: Pod "pod-subpath-test-preprovisionedpv-v5r5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.600621787s
STEP: Saw pod success
May 27 00:31:37.126: INFO: Pod "pod-subpath-test-preprovisionedpv-v5r5" satisfied condition "Succeeded or Failed"
May 27 00:31:37.326: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-v5r5 container test-container-subpath-preprovisionedpv-v5r5: <nil>
STEP: delete the pod
May 27 00:31:37.734: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v5r5 to disappear
May 27 00:31:37.934: INFO: Pod pod-subpath-test-preprovisionedpv-v5r5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v5r5
May 27 00:31:37.934: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v5r5" in namespace "provisioning-3859"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":67,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 19 lines ...
May 27 00:31:14.776: INFO: PersistentVolumeClaim pvc-fp4xk found but phase is Pending instead of Bound.
May 27 00:31:16.970: INFO: PersistentVolumeClaim pvc-fp4xk found and phase=Bound (11.169794196s)
May 27 00:31:16.970: INFO: Waiting up to 3m0s for PersistentVolume aws-zn2fl to have phase Bound
May 27 00:31:17.163: INFO: PersistentVolume aws-zn2fl found and phase=Bound (193.506142ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fjjw
STEP: Creating a pod to test exec-volume-test
May 27 00:31:17.745: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fjjw" in namespace "volume-8407" to be "Succeeded or Failed"
May 27 00:31:17.939: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 193.666706ms
May 27 00:31:20.133: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387649075s
May 27 00:31:22.327: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581722443s
May 27 00:31:24.521: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.775627997s
May 27 00:31:26.715: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.969754432s
May 27 00:31:28.909: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.163853557s
May 27 00:31:31.104: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.358622355s
May 27 00:31:33.298: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.553168977s
May 27 00:31:35.492: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.74702621s
May 27 00:31:37.686: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.941000557s
STEP: Saw pod success
May 27 00:31:37.686: INFO: Pod "exec-volume-test-preprovisionedpv-fjjw" satisfied condition "Succeeded or Failed"
May 27 00:31:37.880: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod exec-volume-test-preprovisionedpv-fjjw container exec-container-preprovisionedpv-fjjw: <nil>
STEP: delete the pod
May 27 00:31:38.283: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fjjw to disappear
May 27 00:31:38.477: INFO: Pod exec-volume-test-preprovisionedpv-fjjw no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fjjw
May 27 00:31:38.477: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fjjw" in namespace "volume-8407"
STEP: Deleting pv and pvc
May 27 00:31:38.670: INFO: Deleting PersistentVolumeClaim "pvc-fp4xk"
May 27 00:31:38.872: INFO: Deleting PersistentVolume "aws-zn2fl"
May 27 00:31:39.414: INFO: Couldn't delete PD "aws://ap-southeast-1a/vol-006cad131fc72ea90", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-006cad131fc72ea90 is currently attached to i-063fbd80874e99720
	status code: 400, request id: f3164a34-64b7-46cd-a6c3-9c9abfddf49f
May 27 00:31:45.357: INFO: Successfully deleted PD "aws://ap-southeast-1a/vol-006cad131fc72ea90".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:45.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8407" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":14,"skipped":154,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:45.764: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
May 27 00:31:29.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
May 27 00:31:30.025: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 27 00:31:30.407: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7741" in namespace "provisioning-7741" to be "Succeeded or Failed"
May 27 00:31:30.595: INFO: Pod "hostpath-symlink-prep-provisioning-7741": Phase="Pending", Reason="", readiness=false. Elapsed: 188.340839ms
May 27 00:31:32.784: INFO: Pod "hostpath-symlink-prep-provisioning-7741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.377157898s
STEP: Saw pod success
May 27 00:31:32.784: INFO: Pod "hostpath-symlink-prep-provisioning-7741" satisfied condition "Succeeded or Failed"
May 27 00:31:32.784: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7741" in namespace "provisioning-7741"
May 27 00:31:32.978: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7741" to be fully deleted
May 27 00:31:33.166: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n24v
May 27 00:31:35.735: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-7741 exec pod-subpath-test-inlinevolume-n24v --container test-container-volume-inlinevolume-n24v -- /bin/sh -c rm -r /test-volume/provisioning-7741'
May 27 00:31:37.693: INFO: stderr: ""
May 27 00:31:37.693: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-n24v
May 27 00:31:37.693: INFO: Deleting pod "pod-subpath-test-inlinevolume-n24v" in namespace "provisioning-7741"
May 27 00:31:37.882: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-n24v" to be fully deleted
STEP: Deleting pod
May 27 00:31:42.265: INFO: Deleting pod "pod-subpath-test-inlinevolume-n24v" in namespace "provisioning-7741"
May 27 00:31:42.642: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7741" in namespace "provisioning-7741" to be "Succeeded or Failed"
May 27 00:31:42.831: INFO: Pod "hostpath-symlink-prep-provisioning-7741": Phase="Pending", Reason="", readiness=false. Elapsed: 188.724544ms
May 27 00:31:45.021: INFO: Pod "hostpath-symlink-prep-provisioning-7741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378171351s
STEP: Saw pod success
May 27 00:31:45.021: INFO: Pod "hostpath-symlink-prep-provisioning-7741" satisfied condition "Succeeded or Failed"
May 27 00:31:45.021: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7741" in namespace "provisioning-7741"
May 27 00:31:45.215: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7741" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:45.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7741" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":47,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:31:45.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
May 27 00:31:46.926: INFO: Waiting up to 5m0s for pod "downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f" in namespace "downward-api-7359" to be "Succeeded or Failed"
May 27 00:31:47.115: INFO: Pod "downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 188.615151ms
May 27 00:31:49.305: INFO: Pod "downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378176107s
STEP: Saw pod success
May 27 00:31:49.305: INFO: Pod "downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f" satisfied condition "Succeeded or Failed"
May 27 00:31:49.494: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f container dapi-container: <nil>
STEP: delete the pod
May 27 00:31:49.881: INFO: Waiting for pod downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f to disappear
May 27 00:31:50.076: INFO: Pod downward-api-c2f8e042-59b9-4a46-8cf3-5321e1ebcd4f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:50.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7359" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
May 27 00:31:46.748: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 27 00:31:46.748: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-wmn7
STEP: Creating a pod to test subpath
May 27 00:31:46.943: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-wmn7" in namespace "provisioning-4339" to be "Succeeded or Failed"
May 27 00:31:47.138: INFO: Pod "pod-subpath-test-inlinevolume-wmn7": Phase="Pending", Reason="", readiness=false. Elapsed: 194.659043ms
May 27 00:31:49.332: INFO: Pod "pod-subpath-test-inlinevolume-wmn7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.388841326s
STEP: Saw pod success
May 27 00:31:49.332: INFO: Pod "pod-subpath-test-inlinevolume-wmn7" satisfied condition "Succeeded or Failed"
May 27 00:31:49.526: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-subpath-test-inlinevolume-wmn7 container test-container-volume-inlinevolume-wmn7: <nil>
STEP: delete the pod
May 27 00:31:49.922: INFO: Waiting for pod pod-subpath-test-inlinevolume-wmn7 to disappear
May 27 00:31:50.116: INFO: Pod pod-subpath-test-inlinevolume-wmn7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-wmn7
May 27 00:31:50.116: INFO: Deleting pod "pod-subpath-test-inlinevolume-wmn7" in namespace "provisioning-4339"
... skipping 48 lines ...
• [SLOW TEST:9.470 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:51.000: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
May 27 00:31:17.531: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ 
wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ 
wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n"
May 27 00:31:17.531: INFO: stdout: "service-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled
-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggle
d-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1607
STEP: Deleting pod verify-service-up-exec-pod-pf4s5 in namespace services-1607
STEP: verifying service-disabled is not up
May 27 00:31:17.940: INFO: Creating new host exec pod
May 27 00:31:20.542: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1607 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.126.50:80 && echo service-down-failed'
May 27 00:31:24.507: INFO: rc: 28
May 27 00:31:24.507: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.126.50:80 && echo service-down-failed" in pod services-1607/verify-service-down-host-exec-pod: error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1607 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.126.50:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.126.50:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1607
STEP: adding service-proxy-name label
STEP: verifying service is not up
May 27 00:31:25.125: INFO: Creating new host exec pod
May 27 00:31:29.731: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1607 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.220.61:80 && echo service-down-failed'
May 27 00:31:33.680: INFO: rc: 28
May 27 00:31:33.680: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.220.61:80 && echo service-down-failed" in pod services-1607/verify-service-down-host-exec-pod: error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1607 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.220.61:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.220.61:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1607
STEP: removing service-proxy-name annotation
STEP: verifying service is up
May 27 00:31:34.288: INFO: Creating new host exec pod
... skipping 8 lines ...
May 27 00:31:43.869: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ 
wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n+ 
wget -q -T 1 -O - http://100.64.220.61:80\n+ echo\n"
May 27 00:31:43.869: INFO: stdout: "service-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled
-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-4wns4\nservice-proxy-toggle
d-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-4wns4\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\nservice-proxy-toggled-58bqk\nservice-proxy-toggled-gqkqp\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1607
STEP: Deleting pod verify-service-up-exec-pod-zcflt in namespace services-1607
STEP: verifying service-disabled is still not up
May 27 00:31:44.277: INFO: Creating new host exec pod
May 27 00:31:46.876: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1607 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.126.50:80 && echo service-down-failed'
May 27 00:31:50.848: INFO: rc: 28
May 27 00:31:50.848: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.126.50:80 && echo service-down-failed" in pod services-1607/verify-service-down-host-exec-pod: error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1607 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.126.50:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.126.50:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1607
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:51.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 114 lines ...
May 27 00:31:31.658: INFO: PersistentVolumeClaim pvc-ckcqn found and phase=Bound (4.575078491s)
May 27 00:31:31.658: INFO: Waiting up to 3m0s for PersistentVolume nfs-7t7nw to have phase Bound
May 27 00:31:31.846: INFO: PersistentVolume nfs-7t7nw found and phase=Bound (187.84017ms)
STEP: Checking pod has write access to PersistentVolume
May 27 00:31:32.222: INFO: Creating nfs test pod
May 27 00:31:32.411: INFO: Pod should terminate with exitcode 0 (success)
May 27 00:31:32.411: INFO: Waiting up to 5m0s for pod "pvc-tester-wpk6f" in namespace "pv-8213" to be "Succeeded or Failed"
May 27 00:31:32.598: INFO: Pod "pvc-tester-wpk6f": Phase="Pending", Reason="", readiness=false. Elapsed: 187.512614ms
May 27 00:31:34.786: INFO: Pod "pvc-tester-wpk6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.3754982s
STEP: Saw pod success
May 27 00:31:34.786: INFO: Pod "pvc-tester-wpk6f" satisfied condition "Succeeded or Failed"
May 27 00:31:34.786: INFO: Pod pvc-tester-wpk6f succeeded 
May 27 00:31:34.787: INFO: Deleting pod "pvc-tester-wpk6f" in namespace "pv-8213"
May 27 00:31:34.988: INFO: Wait up to 5m0s for pod "pvc-tester-wpk6f" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
May 27 00:31:35.175: INFO: Deleting PVC pvc-ckcqn to trigger reclamation of PV 
May 27 00:31:35.175: INFO: Deleting PersistentVolumeClaim "pvc-ckcqn"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":5,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 27 00:31:53.529: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b" in namespace "downward-api-9719" to be "Succeeded or Failed"
May 27 00:31:53.717: INFO: Pod "downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b": Phase="Pending", Reason="", readiness=false. Elapsed: 188.330449ms
May 27 00:31:55.906: INFO: Pod "downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.37746641s
STEP: Saw pod success
May 27 00:31:55.906: INFO: Pod "downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b" satisfied condition "Succeeded or Failed"
May 27 00:31:56.095: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b container client-container: <nil>
STEP: delete the pod
May 27 00:31:56.503: INFO: Waiting for pod downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b to disappear
May 27 00:31:56.692: INFO: Pod downwardapi-volume-b2a7e1f3-d33b-4509-b828-9f0991f2ea8b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:56.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9719" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:31:57.093: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 110 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-f1f8696d-aaed-414b-86af-079cb7affaaf
STEP: Creating a pod to test consume configMaps
May 27 00:31:54.409: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6" in namespace "projected-7790" to be "Succeeded or Failed"
May 27 00:31:54.597: INFO: Pod "pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6": Phase="Pending", Reason="", readiness=false. Elapsed: 187.735324ms
May 27 00:31:56.785: INFO: Pod "pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.375631012s
STEP: Saw pod success
May 27 00:31:56.785: INFO: Pod "pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6" satisfied condition "Succeeded or Failed"
May 27 00:31:56.972: INFO: Trying to get logs from node ip-172-20-40-209.ap-southeast-1.compute.internal pod pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 27 00:31:57.376: INFO: Waiting for pod pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6 to disappear
May 27 00:31:57.565: INFO: Pod pod-projected-configmaps-0d2f71d0-75ab-4f78-9fe0-aa1a186838b6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:31:57.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7790" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 186 lines ...
May 27 00:32:00.421: INFO: AfterEach: Cleaning up test resources.
May 27 00:32:00.421: INFO: pvc is nil
May 27 00:32:00.421: INFO: Deleting PersistentVolume "hostpath-88zgj"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":7,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
May 27 00:31:58.286: INFO: Waiting up to 5m0s for pod "pod-32da78a8-dfb3-4850-839c-8fc979a141e7" in namespace "emptydir-4161" to be "Succeeded or Failed"
May 27 00:31:58.474: INFO: Pod "pod-32da78a8-dfb3-4850-839c-8fc979a141e7": Phase="Pending", Reason="", readiness=false. Elapsed: 188.547192ms
May 27 00:32:00.663: INFO: Pod "pod-32da78a8-dfb3-4850-839c-8fc979a141e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.37713583s
STEP: Saw pod success
May 27 00:32:00.663: INFO: Pod "pod-32da78a8-dfb3-4850-839c-8fc979a141e7" satisfied condition "Succeeded or Failed"
May 27 00:32:00.851: INFO: Trying to get logs from node ip-172-20-40-196.ap-southeast-1.compute.internal pod pod-32da78a8-dfb3-4850-839c-8fc979a141e7 container test-container: <nil>
STEP: delete the pod
May 27 00:32:01.238: INFO: Waiting for pod pod-32da78a8-dfb3-4850-839c-8fc979a141e7 to disappear
May 27 00:32:01.427: INFO: Pod pod-32da78a8-dfb3-4850-839c-8fc979a141e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0527 00:27:01.339238    4744 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 27 00:32:01.720: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:32:01.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2742" for this suite.


• [SLOW TEST:309.250 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 5 lines ...
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1062
[It] should not deadlock when a pod's predecessor fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248
STEP: Creating statefulset ss in namespace statefulset-1062
May 27 00:32:01.941: INFO: error finding default storageClass : No default storage class found
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 27 00:32:01.942: INFO: Deleting all statefulset in ns statefulset-1062
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:32:02.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should not deadlock when a pod's predecessor fails [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248

    error finding default storageClass : No default storage class found

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:830
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:32:02.906: INFO: Only supported for providers [vsphere] (not aws)
... skipping 114 lines ...
May 27 00:32:00.370: INFO: PersistentVolumeClaim pvc-nvvfg found but phase is Pending instead of Bound.
May 27 00:32:02.559: INFO: PersistentVolumeClaim pvc-nvvfg found and phase=Bound (6.758104465s)
May 27 00:32:02.559: INFO: Waiting up to 3m0s for PersistentVolume local-6v4r5 to have phase Bound
May 27 00:32:02.748: INFO: PersistentVolume local-6v4r5 found and phase=Bound (188.63394ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-vq8z
STEP: Creating a pod to test exec-volume-test
May 27 00:32:03.321: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vq8z" in namespace "volume-1796" to be "Succeeded or Failed"
May 27 00:32:03.510: INFO: Pod "exec-volume-test-preprovisionedpv-vq8z": Phase="Pending", Reason="", readiness=false. Elapsed: 188.944661ms
May 27 00:32:05.699: INFO: Pod "exec-volume-test-preprovisionedpv-vq8z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378139491s
STEP: Saw pod success
May 27 00:32:05.700: INFO: Pod "exec-volume-test-preprovisionedpv-vq8z" satisfied condition "Succeeded or Failed"
May 27 00:32:05.889: INFO: Trying to get logs from node ip-172-20-41-144.ap-southeast-1.compute.internal pod exec-volume-test-preprovisionedpv-vq8z container exec-container-preprovisionedpv-vq8z: <nil>
STEP: delete the pod
May 27 00:32:06.323: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vq8z to disappear
May 27 00:32:06.511: INFO: Pod exec-volume-test-preprovisionedpv-vq8z no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-vq8z
May 27 00:32:06.511: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vq8z" in namespace "volume-1796"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":49,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":11,"skipped":98,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:31:33.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2258 lines ...
• [SLOW TEST:39.727 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:208
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":12,"skipped":98,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:32:13.631: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:32:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-413" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":13,"skipped":102,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:32:15.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 27 00:32:17.049: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ebc8871e-76c2-4cab-8eb9-9cd0ec0a5d5a" in namespace "security-context-test-555" to be "Succeeded or Failed"
May 27 00:32:17.240: INFO: Pod "busybox-user-65534-ebc8871e-76c2-4cab-8eb9-9cd0ec0a5d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 190.682808ms
May 27 00:32:19.431: INFO: Pod "busybox-user-65534-ebc8871e-76c2-4cab-8eb9-9cd0ec0a5d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.381646015s
May 27 00:32:19.431: INFO: Pod "busybox-user-65534-ebc8871e-76c2-4cab-8eb9-9cd0ec0a5d5a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:32:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-555" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":102,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 27 00:32:19.829: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:27:01.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
May 27 00:27:39.558: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support inline execution and attach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:551
STEP: executing a command with run and attach with stdin
May 27 00:27:39.558: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 run run-test --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --stdin -- sh -c echo -n read: && cat && echo 'stdin closed''
May 27 00:30:00.429: INFO: rc: 1
May 27 00:30:00.429: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 run run-test --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --stdin -- sh -c echo -n read: && cat && echo 'stdin closed':\nCommand stdout:\n\nstderr:\nIf you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: Timeout occured\nerror: timed out waiting for the condition\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 run run-test --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --stdin -- sh -c echo -n read: && cat && echo 'stdin closed':
    Command stdout:
    
    stderr:
    If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: Timeout occured
    error: timed out waiting for the condition
    
    error:
    exit status 1
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc0028f0840, 0x0, 0xc002532880, 0xc, 0xa, 0xc0028e82d0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598 +0xbf
... skipping 12 lines ...
STEP: using delete to clean up resources
May 27 00:30:00.430: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 delete --grace-period=0 --force -f -'
May 27 00:30:01.343: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 27 00:30:01.343: INFO: stdout: "pod \"httpd\" force deleted\n"
May 27 00:30:01.343: INFO: Running '/tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 get rc,svc -l name=httpd --no-headers'
May 27 00:32:13.715: INFO: rc: 1
May 27 00:32:13.716: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 get rc,svc -l name=httpd --no-headers:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: net/http: TLS handshake timeout\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 get rc,svc -l name=httpd --no-headers:
    Command stdout:
    
    stderr:
    Unable to connect to the server: net/http: TLS handshake timeout
    
    error:
    exit status 1
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc0028f0160, 0x0, 0xc002532880, 0xc, 0x5, 0xc0028e83b0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598 +0xbf
... skipping 18 lines ...
k8s.io/kubernetes/test/e2e.TestE2E(0xc003442c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003442c00, 0x4fbaa38)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
E0527 00:32:13.716967    4806 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"May 27 00:32:13.716: Unexpected error:\n    <exec.CodeExitError>: {\n        Err: {\n            s: \"error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 get rc,svc -l name=httpd --no-headers:\\nCommand stdout:\\n\\nstderr:\\nUnable to connect to the server: net/http: TLS handshake timeout\\n\\nerror:\\nexit status 1\",\n        },\n        Code: 1,\n    }\n    error running /tmp/kubectl920392710/kubectl --server=https://api.e2e-4c7293f1bb-5f87d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4933 get rc,svc -l name=httpd --no-headers:\n    Command stdout:\n    \n    stderr:\n    Unable to connect to the server: net/http: TLS handshake timeout\n    \n    error:\n    exit status 1\noccurred", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go", Line:598, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc0028f0160, 0x0, 0xc002532880, 0xc, 0x5, 0xc0028e83b0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598 +0xbf\nk8s.io/kubernetes/test/e2e/framework.RunKubectlOrDie(0xc002532880, 0xc, 0xc0017a0c78, 0x5, 0x5, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:660 +0x85\nk8s.io/kubernetes/test/e2e/kubectl.assertCleanup.func1(0x1, 0xc0034e6be0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:185 +0x19c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0017a0e20, 0xcaf500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0033b8300, 0xc0017a0e20, 0xc0033b8300, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1dcd6500, 0xdf8475800, 0xc0017a0e20, 0xc0029bc200, 0xc0017a0e58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/kubectl.assertCleanup(0xc002532880, 0xc, 0xc0017a0f48, 0x1, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:198 +0xbe\nk8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs(0xc00290e140, 0x13c, 0xc002532880, 0xc, 0xc0017a0f48, 0x1, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:176 +0x185\nk8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391 +0x88\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc003442c00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc003442c00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc003442c00, 0x4fbaa38)\n\t/usr/local/go/src/testing/testing.go:1123 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1168 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4991180, 0xc0031e4200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89
panic(0x4991180, 0xc0031e4200)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc003952e00, 0x32b, 0x77626c5, 0x67, 0x256, 0xc0005f4000, 0x9b6)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x4182ea0, 0x5420370)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc003952e00, 0x32b, 0xc0017a0848, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc003952e00, 0x32b, 0xc0017a0930, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc003952a80, 0x316, 0xc002532078, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc0031e4000, 0x5553540, 0x79aaec0, 0x0, 0x0, 0x0, 0x0, 0xc0031e4000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc0031e4000, 0x5553540, 0x79aaec0, 0x0, 0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x54f1a60, 0xc0032cc080, 0x0, 0x0, 0x0)
... skipping 57983 lines ...
pe=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-wrvr7\"\nI0527 00:38:35.769420       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-4llfq\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0527 00:38:35.771592       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nI0527 00:38:35.772100       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-cvq87\"\nE0527 00:38:35.778026       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-4llfq\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:35.778290       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:35.781304       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nI0527 00:38:35.781547       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-9phzk\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0527 00:38:35.784507       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-9phzk\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:35.784657       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:35.785857       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nE0527 00:38:35.785889       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-768pq\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:35.786041       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-768pq\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0527 00:38:35.794762       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:35.795864       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nI0527 00:38:35.796071       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-29f5z\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0527 00:38:35.796130       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-29f5z\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:35.815676       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9417/pod-a293077f-0902-44e2-913a-49c5e51a512d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-vlj4x pvc- persistent-local-volumes-test-9417  b03dfe2b-28e1-4153-be68-581a82720a4d 26544 0 2021-05-27 00:38:20 +0000 UTC 2021-05-27 00:38:35 +0000 UTC 0xc002141f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv58b7s,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9417,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:38:35.816326       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9417/pvc-vlj4x because it is still being used\nI0527 00:38:35.836303       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:35.837489       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nE0527 00:38:35.837531       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-snbfg\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:35.837708       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-snbfg\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0527 00:38:35.895768       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-6422\nI0527 00:38:35.917705       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:35.919135       1 replica_set.go:584] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nE0527 00:38:35.919198       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-2fkf7\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:35.919398       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-2fkf7\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0527 00:38:36.011210       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:36.013416       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nE0527 00:38:36.013576       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-dljjk\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:36.013892       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-dljjk\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0527 00:38:36.047970       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4124/default: secrets \"default-token-h655f\" is forbidden: unable to create new content in namespace provisioning-4124 because it is being terminated\nI0527 00:38:36.079298       1 replica_set.go:559] \"Too few replicas\" 
replicaSet=\"replication-controller-813/condition-test\" need=3 creating=1\nI0527 00:38:36.080760       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-813/condition-test\nE0527 00:38:36.080800       1 replica_set.go:532] sync \"replication-controller-813/condition-test\" failed with pods \"condition-test-2gh4n\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0527 00:38:36.080978       1 event.go:291] \"Event occurred\" object=\"replication-controller-813/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-2gh4n\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0527 00:38:36.661851       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:38:36.704564       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9417/pod-a293077f-0902-44e2-913a-49c5e51a512d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-vlj4x pvc- persistent-local-volumes-test-9417  b03dfe2b-28e1-4153-be68-581a82720a4d 26544 0 2021-05-27 00:38:20 +0000 UTC 2021-05-27 00:38:35 +0000 UTC 0xc002141f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv58b7s,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9417,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:38:36.704633       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9417/pvc-vlj4x because it is still being used\nE0527 00:38:36.747318       1 tokens_controller.go:262] error synchronizing serviceaccount metadata-concealment-3962/default: secrets \"default-token-lgrp4\" is forbidden: unable to create new content in namespace metadata-concealment-3962 because it is being terminated\nI0527 00:38:37.626010       1 namespace_controller.go:185] Namespace has been deleted disruption-7331\nE0527 00:38:37.748128       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0527 00:38:37.868609       1 tokens_controller.go:262] error synchronizing serviceaccount volume-7100/default: secrets \"default-token-97dsp\" is forbidden: unable to create new content in namespace volume-7100 because it is being terminated\nI0527 00:38:37.896045       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rollover-deployment-78bc8b888c to 1\"\nI0527 00:38:37.896396       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-4077/test-rollover-deployment-78bc8b888c\" need=1 creating=1\nI0527 00:38:37.905199       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4077/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:37.906551       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment-78bc8b888c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rollover-deployment-78bc8b888c-pfgst\"\nI0527 00:38:37.918827       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4077/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:37.931009       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-1435/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.3.98).\nI0527 00:38:37.947916       1 pvc_protection_controller.go:291] PVC volume-8924/pvc-m7nv9 is unused\nI0527 00:38:37.956597       1 pv_controller.go:638] volume \"local-q989l\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:38:37.958745       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:37.958796       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:37.958809       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:37.958824       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:37.960733       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:37.962415       1 pv_controller.go:864] volume \"local-q989l\" entered phase \"Released\"\nI0527 00:38:38.133822       1 pv_controller_base.go:504] deletion of claim \"volume-8924/pvc-m7nv9\" was already processed\nI0527 00:38:38.249042       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4767-4374/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nE0527 00:38:38.541468       1 tokens_controller.go:262] error synchronizing serviceaccount projected-1236/default: secrets \"default-token-q2q48\" is forbidden: unable to create new content in namespace 
projected-1236 because it is being terminated\nI0527 00:38:38.629394       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4767-4374/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0527 00:38:38.676282       1 namespace_controller.go:185] Namespace has been deleted provisioning-1901\nE0527 00:38:38.817592       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0527 00:38:38.895421       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:38:39.435141       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-4077/test-rollover-deployment-78bc8b888c\" need=0 deleting=1\nI0527 00:38:39.435177       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-4077/test-rollover-deployment-78bc8b888c\" relatedReplicaSets=[test-rollover-controller test-rollover-deployment-78bc8b888c test-rollover-deployment-668db69979]\nI0527 00:38:39.435905       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rollover-deployment-78bc8b888c to 0\"\nI0527 00:38:39.435990       1 controller_utils.go:604] \"Deleting pod\" controller=\"test-rollover-deployment-78bc8b888c\" pod=\"deployment-4077/test-rollover-deployment-78bc8b888c-pfgst\"\nI0527 00:38:39.444365       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4077/test-rollover-deployment\" 
err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:39.446045       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe\") from node \"ip-172-20-33-93.ap-southeast-1.compute.internal\" \nI0527 00:38:39.449191       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-4077/test-rollover-deployment-668db69979\" need=1 creating=1\nI0527 00:38:39.450175       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rollover-deployment-668db69979 to 1\"\nI0527 00:38:39.452505       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment-78bc8b888c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rollover-deployment-78bc8b888c-pfgst\"\nI0527 00:38:39.458929       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment-668db69979\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rollover-deployment-668db69979-b6nqm\"\nI0527 00:38:39.476940       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4077/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:39.482896       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4077/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps 
\\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:39.499233       1 aws.go:2014] Assigned mount device ch -> volume vol-07e5b4da20cff9ffe\nI0527 00:38:39.764996       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.166.250).\nI0527 00:38:39.872665       1 namespace_controller.go:185] Namespace has been deleted container-probe-5971\nI0527 00:38:39.941203       1 namespace_controller.go:185] Namespace has been deleted provisioning-6053\nI0527 00:38:39.961292       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.166.250).\nI0527 00:38:39.962458       1 event.go:291] \"Event occurred\" object=\"ephemeral-9915-136/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0527 00:38:40.025832       1 aws.go:2427] AttachVolume volume=\"vol-07e5b4da20cff9ffe\" instance=\"i-081c5901a8830e60d\" request returned {\n  AttachTime: 2021-05-27 00:38:40.011 +0000 UTC,\n  Device: \"/dev/xvdch\",\n  InstanceId: \"i-081c5901a8830e60d\",\n  State: \"attaching\",\n  VolumeId: \"vol-07e5b4da20cff9ffe\"\n}\nI0527 00:38:40.045841       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-2820/pod-adoption-release-lxzxt\" objectUID=03bc835a-eec3-453d-ba23-de372b4c6244 kind=\"Pod\" virtual=false\nI0527 00:38:40.048497       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-2820/pod-adoption-release-lxzxt\" objectUID=03bc835a-eec3-453d-ba23-de372b4c6244 kind=\"Pod\" propagationPolicy=Background\nI0527 00:38:40.340832       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.41.103).\nI0527 00:38:40.541718       1 namespace_controller.go:185] Namespace has been deleted provisioning-8323\nI0527 00:38:40.545152       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.41.103).\nI0527 00:38:40.545951       1 event.go:291] \"Event occurred\" object=\"ephemeral-9915-136/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nW0527 00:38:40.709178       1 aws.go:2268] Expected instance i-033cc39af9e90ab7c/detached for volume vol-07e5b4da20cff9ffe, but found instance i-081c5901a8830e60d/attached\nI0527 00:38:40.770716       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.166.250).\nI0527 00:38:40.778596       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.17.19).\nI0527 00:38:40.823863       1 replica_set.go:449] ReplicaSet \"test-rollover-deployment-668db69979\" will be enqueued after 10s for availability check\nI0527 00:38:41.006907       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.17.19).\nI0527 00:38:41.008016       1 event.go:291] \"Event occurred\" object=\"ephemeral-9915-136/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0527 00:38:41.112590       1 namespace_controller.go:185] Namespace has been deleted provisioning-4124\nE0527 00:38:41.150216       1 namespace_controller.go:162] deletion of namespace volume-expand-8024 failed: unable to retrieve the complete list of server APIs: webhook.example.com/v2: the server could not find the requested resource\nI0527 00:38:41.186417       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.208.216).\nI0527 00:38:41.292814       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9417/pod-a293077f-0902-44e2-913a-49c5e51a512d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-vlj4x pvc- persistent-local-volumes-test-9417  b03dfe2b-28e1-4153-be68-581a82720a4d 26544 0 2021-05-27 00:38:20 +0000 UTC 2021-05-27 00:38:35 +0000 UTC 0xc002141f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv58b7s,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9417,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:38:41.293075       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9417/pvc-vlj4x because it is still being used\nI0527 00:38:41.296449       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9417/pod-9928f70d-1fed-433f-998d-e61c9a9fc9c9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-vlj4x pvc- persistent-local-volumes-test-9417  b03dfe2b-28e1-4153-be68-581a82720a4d 26544 0 2021-05-27 00:38:20 +0000 UTC 2021-05-27 00:38:35 +0000 UTC 0xc002141f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv58b7s,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9417,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:38:41.296693       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9417/pvc-vlj4x because it is still being used\nI0527 00:38:41.307453       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9417/pod-9928f70d-1fed-433f-998d-e61c9a9fc9c9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-vlj4x pvc- persistent-local-volumes-test-9417  b03dfe2b-28e1-4153-be68-581a82720a4d 26544 0 2021-05-27 00:38:20 +0000 UTC 2021-05-27 00:38:35 +0000 UTC 0xc002141f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:38:20 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv58b7s,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9417,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:38:41.307664       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9417/pvc-vlj4x because it is still being used\nI0527 00:38:41.318728       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-9417/pvc-vlj4x is unused\nI0527 00:38:41.329495       1 pv_controller.go:638] volume \"local-pv58b7s\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:38:41.334103       1 pv_controller.go:864] volume \"local-pv58b7s\" entered phase \"Released\"\nI0527 00:38:41.340486       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-9417/pvc-vlj4x\" was already processed\nI0527 00:38:41.393415       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.208.216).
I0527 00:38:41.397097       1 event.go:291] "Event occurred" object="ephemeral-9915-136/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0527 00:38:41.576041       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.61.17).
I0527 00:38:41.776873       1 event.go:291] "Event occurred" object="ephemeral-9915-136/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0527 00:38:41.777802       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.61.17).
I0527 00:38:41.799384       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-2339
I0527 00:38:41.800494       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.17.19).
I0527 00:38:41.835818       1 namespace_controller.go:185] Namespace has been deleted metadata-concealment-3962
I0527 00:38:41.896728       1 garbagecollector.go:471] "Processing object" object="webhook-1435/e2e-test-webhook-9ntrh" objectUID=e9e492c5-a4e6-46d6-b1a9-995f4578e3ac kind="EndpointSlice" virtual=false
I0527 00:38:41.901587       1 garbagecollector.go:580] "Deleting object" object="webhook-1435/e2e-test-webhook-9ntrh" objectUID=e9e492c5-a4e6-46d6-b1a9-995f4578e3ac kind="EndpointSlice" propagationPolicy=Background
E0527 00:38:41.952802       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-813/default: secrets "default-token-72747" is forbidden: unable to create new content in namespace replication-controller-813 because it is being terminated
I0527 00:38:41.956310       1 garbagecollector.go:471] "Processing object" object="replication-controller-813/condition-test-wrvr7" objectUID=4ea99d7c-981c-4a4e-b252-662951b71bb9 kind="Pod" virtual=false
I0527 00:38:41.956598       1 garbagecollector.go:471] "Processing object" object="replication-controller-813/condition-test-cvq87" objectUID=0bfd7925-2e40-4565-8c20-07a30620ed77 kind="Pod" virtual=false
I0527 00:38:41.959073       1 garbagecollector.go:580] "Deleting object" object="replication-controller-813/condition-test-cvq87" objectUID=0bfd7925-2e40-4565-8c20-07a30620ed77 kind="Pod" propagationPolicy=Background
I0527 00:38:41.959659       1 garbagecollector.go:580] "Deleting object" object="replication-controller-813/condition-test-wrvr7" objectUID=4ea99d7c-981c-4a4e-b252-662951b71bb9 kind="Pod" propagationPolicy=Background
E0527 00:38:41.988782       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-7431/pvc-btsg2: storageclass.storage.k8s.io "provisioning-7431" not found
I0527 00:38:41.989096       1 event.go:291] "Event occurred" object="provisioning-7431/pvc-btsg2" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-7431\" not found"
I0527 00:38:42.016486       1 resource_quota_controller.go:307] Resource quota has been deleted replication-controller-813/condition-test
I0527 00:38:42.094589       1 garbagecollector.go:471] "Processing object" object="webhook-1435/sample-webhook-deployment-6bd9446d55" objectUID=3f1d7b94-6c2b-4573-a6bd-b632510b138b kind="ReplicaSet" virtual=false
I0527 00:38:42.094875       1 deployment_controller.go:581] Deployment webhook-1435/sample-webhook-deployment has been deleted
I0527 00:38:42.096209       1 garbagecollector.go:580] "Deleting object" object="webhook-1435/sample-webhook-deployment-6bd9446d55" objectUID=3f1d7b94-6c2b-4573-a6bd-b632510b138b kind="ReplicaSet" propagationPolicy=Background
I0527 00:38:42.099047       1 garbagecollector.go:471] "Processing object" object="webhook-1435/sample-webhook-deployment-6bd9446d55-jkl2k" objectUID=fea28cff-ba0a-4ec4-8907-6a360109c60f kind="Pod" virtual=false
I0527 00:38:42.100760       1 garbagecollector.go:580] "Deleting object" object="webhook-1435/sample-webhook-deployment-6bd9446d55-jkl2k" objectUID=fea28cff-ba0a-4ec4-8907-6a360109c60f kind="Pod" propagationPolicy=Background
I0527 00:38:42.132876       1 aws.go:2037] Releasing in-process attachment entry: ch -> volume vol-07e5b4da20cff9ffe
I0527 00:38:42.132925       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe") from node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:38:42.132944       1 actual_state_of_world.go:350] Volume "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe" is already added to attachedVolume list to node "ip-172-20-33-93.ap-southeast-1.compute.internal", update device path "/dev/xvdch"
I0527 00:38:42.133092       1 event.go:291] "Event occurred" object="volume-2282/aws-client" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-volume-0\" "
I0527 00:38:42.186295       1 pv_controller.go:864] volume "local-wwk56" entered phase "Available"
I0527 00:38:42.194504       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.208.216).
I0527 00:38:42.480782       1 namespace_controller.go:185] Namespace has been deleted provisioning-2800
I0527 00:38:42.580667       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.61.17).
I0527 00:38:42.724891       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0f131a276669317a6") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:38:42.750338       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-00c040294c27f37de") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:38:42.756823       1 operation_generator.go:1409] Verified volume is safe to detach for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0f131a276669317a6") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:38:42.762494       1 operation_generator.go:1409] Verified volume is safe to detach for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-00c040294c27f37de") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
W0527 00:38:42.809045       1 aws.go:2268] Expected instance i-033cc39af9e90ab7c/detached for volume vol-07e5b4da20cff9ffe, but found instance i-081c5901a8830e60d/attached
E0527 00:38:42.984440       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-9417/default: secrets "default-token-gg659" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9417 because it is being terminated
I0527 00:38:43.098054       1 namespace_controller.go:185] Namespace has been deleted volume-7100
I0527 00:38:43.162990       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.76.23).
I0527 00:38:43.361272       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.76.23).
I0527 00:38:43.365183       1 event.go:291] "Event occurred" object="provisioning-3441-9268/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
E0527 00:38:43.489302       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:38:43.599585       1 namespace_controller.go:185] Namespace has been deleted container-runtime-8980
I0527 00:38:43.660346       1 namespace_controller.go:185] Namespace has been deleted ephemeral-7757-6174
I0527 00:38:43.663594       1 namespace_controller.go:185] Namespace has been deleted projected-1236
I0527 00:38:43.727347       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.229.113).
I0527 00:38:43.942688       1 event.go:291] "Event occurred" object="provisioning-3441-9268/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0527 00:38:43.942950       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.229.113).
I0527 00:38:44.104156       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.15.2).
I0527 00:38:44.169932       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.76.23).
I0527 00:38:44.295797       1 namespace_controller.go:185] Namespace has been deleted provisioning-9269
I0527 00:38:44.296474       1 event.go:291] "Event occurred" object="provisioning-3441-9268/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0527 00:38:44.296344       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.15.2).
E0527 00:38:44.344529       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-67/pvc-m6h8q: storageclass.storage.k8s.io "volume-67" not found
I0527 00:38:44.344917       1 event.go:291] "Event occurred" object="volume-67/pvc-m6h8q" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-67\" not found"
I0527 00:38:44.480167       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.116.253).
I0527 00:38:44.537474       1 pv_controller.go:864] volume "local-qwwrq" entered phase "Available"
I0527 00:38:44.613939       1 graph_builder.go:587] add [v1/Pod, namespace: csi-mock-volumes-4767, name: inline-volume-rqkhs, uid: a0aadbd0-4d17-47c9-b439-170b8080e8f1] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0527 00:38:44.614181       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4767/inline-volume-rqkhs" objectUID=a0aadbd0-4d17-47c9-b439-170b8080e8f1 kind="Pod" virtual=false
I0527 00:38:44.616853       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: csi-mock-volumes-4767, name: inline-volume-rqkhs, uid: a0aadbd0-4d17-47c9-b439-170b8080e8f1]
I0527 00:38:44.671985       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.116.253).
I0527 00:38:44.672758       1 event.go:291] "Event occurred" object="provisioning-3441-9268/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0527 00:38:44.730628       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.229.113).
I0527 00:38:44.856284       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.25.127).
I0527 00:38:45.048886       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.25.127).
I0527 00:38:45.052201       1 event.go:291] "Event occurred" object="provisioning-3441-9268/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0527 00:38:45.109033       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.15.2).
I0527 00:38:45.483326       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.116.253).
I0527 00:38:45.508327       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.76.23).
I0527 00:38:45.605889       1 event.go:291] "Event occurred" object="provisioning-3441/csi-hostpath8279g" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-3441\" or manually created by system administrator"
I0527 00:38:45.606138       1 event.go:291] "Event occurred" object="provisioning-3441/csi-hostpath8279g" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-3441\" or manually created by system administrator"
E0527 00:38:45.867613       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:38:45.991639       1 pv_controller.go:915] claim "provisioning-5523/pvc-7z9hq" bound to volume "local-fgm79"
I0527 00:38:45.998742       1 pv_controller.go:864] volume "local-fgm79" entered phase "Bound"
I0527 00:38:45.998788       1 pv_controller.go:967] volume "local-fgm79" bound to claim "provisioning-5523/pvc-7z9hq"
I0527 00:38:46.003742       1 pv_controller.go:808] claim "provisioning-5523/pvc-7z9hq" entered phase "Bound"
I0527 00:38:46.003841       1 pv_controller.go:915] claim "provisioning-7431/pvc-btsg2" bound to volume "local-wwk56"
I0527 00:38:46.009043       1 pv_controller.go:864] volume "local-wwk56" entered phase "Bound"
I0527 00:38:46.009067       1 pv_controller.go:967] volume "local-wwk56" bound to claim "provisioning-7431/pvc-btsg2"
I0527 00:38:46.014527       1 pv_controller.go:808] claim "provisioning-7431/pvc-btsg2" entered phase "Bound"
I0527 00:38:46.014756       1 event.go:291] "Event occurred" object="provisioning-3441/csi-hostpath8279g" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-3441\" or manually created by system administrator"
I0527 00:38:46.014855       1 pv_controller.go:915] claim "volume-67/pvc-m6h8q" bound to volume "local-qwwrq"
I0527 00:38:46.021208       1 pv_controller.go:864] volume "local-qwwrq" entered phase "Bound"
I0527 00:38:46.021250       1 pv_controller.go:967] volume "local-qwwrq" bound to claim "volume-67/pvc-m6h8q"
I0527 00:38:46.026118       1 pv_controller.go:808] claim "volume-67/pvc-m6h8q" entered phase "Bound"
I0527 00:38:46.292714       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-dd94f59b7" need=6 creating=6
I0527 00:38:46.293385       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-dd94f59b7 to 6"
I0527 00:38:46.304488       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-fpfs2"
I0527 00:38:46.306848       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:46.315080       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-shnph"
I0527 00:38:46.318371       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-2z2mj"
I0527 00:38:46.330183       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-ndjfh"
I0527 00:38:46.331021       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-fjgbk"
I0527 00:38:46.336174       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-d4wfz"
I0527 00:38:46.370132       1 namespace_controller.go:185] Namespace has been deleted volume-expand-8024
I0527 00:38:46.450600       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.61.17).
I0527 00:38:46.461086       1 pv_controller.go:864] volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" entered phase "Bound"
I0527 00:38:46.461115       1 pv_controller.go:967] volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" bound to claim "provisioning-3441/csi-hostpath8279g"
I0527 00:38:46.471160       1 pv_controller.go:808] claim "provisioning-3441/csi-hostpath8279g" entered phase "Bound"
I0527 00:38:46.513132       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.76.23).
E0527 00:38:46.545943       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-1435/default: secrets "default-token-k45l9" is forbidden: unable to create new content in namespace webhook-1435 because it is being terminated
I0527 00:38:46.799851       1 namespace_controller.go:185] Namespace has been deleted projected-8172
W0527 00:38:46.914027       1 aws.go:2268] Expected instance i-033cc39af9e90ab7c/detached for volume vol-07e5b4da20cff9ffe, but found instance i-081c5901a8830e60d/attached
I0527 00:38:47.047729       1 namespace_controller.go:185] Namespace has been deleted replication-controller-813
I0527 00:38:47.050582       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.166.250).
I0527 00:38:47.108007       1 replica_set.go:559] "Too few replicas" replicaSet="webhook-3920/sample-webhook-deployment-6bd9446d55" need=1 creating=1
I0527 00:38:47.108282       1 event.go:291] "Event occurred" object="webhook-3920/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1"
I0527 00:38:47.117495       1 event.go:291] "Event occurred" object="webhook-3920/sample-webhook-deployment-6bd9446d55" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-6bd9446d55-zlz6f"
I0527 00:38:47.125596       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3920/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:47.302852       1 replica_set.go:559] "Too few replicas" replicaSet="kubectl-6107/update-demo-nautilus" need=2 creating=2
I0527 00:38:47.307239       1 event.go:291] "Event occurred" object="kubectl-6107/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: update-demo-nautilus-vhmbd"
I0527 00:38:47.315120       1 event.go:291] "Event occurred" object="kubectl-6107/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: update-demo-nautilus-xbdgg"
I0527 00:38:47.431831       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-dd94f59b7 to 7"
I0527 00:38:47.432089       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-dd94f59b7" need=7 creating=1
I0527 00:38:47.439802       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-pxkm4"
I0527 00:38:47.440706       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-66d6495f4b" need=2 creating=2
I0527 00:38:47.446866       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-9163-crds], removed: [crd-publish-openapi-test-common-group.example.com/v4, Resource=e2e-test-crd-publish-openapi-3351-crds crd-publish-openapi-test-common-group.example.com/v5, Resource=e2e-test-crd-publish-openapi-2470-crds crd-publish-openapi-test-empty.example.com/v1, Resource=e2e-test-crd-publish-openapi-7628-crds]
I0527 00:38:47.447000       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-kubectl-9163-crds.kubectl.example.com
I0527 00:38:47.449000       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0527 00:38:47.449366       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (13h53m30.64926721s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0527 00:38:47.449592       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-66d6495f4b to 2"
I0527 00:38:47.456280       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.61.17).
I0527 00:38:47.461486       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-66d6495f4b-z6hp7"
I0527 00:38:47.462041       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.41.103).
I0527 00:38:47.466774       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:47.512097       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-66d6495f4b-fks4w"
I0527 00:38:47.545565       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:47.549279       1 shared_informer.go:247] Caches are synced for resource quota 
I0527 00:38:47.549298       1 resource_quota_controller.go:454] synced quota controller
I0527 00:38:47.550843       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-dd94f59b7" need=6 deleting=1
I0527 00:38:47.551070       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-dd94f59b7" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b]
I0527 00:38:47.551264       1 controller_utils.go:604] "Deleting pod" controller="webserver-dd94f59b7" pod="deployment-6359/webserver-dd94f59b7-pxkm4"
I0527 00:38:47.552450       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-dd94f59b7 to 6"
I0527 00:38:47.562374       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-66d6495f4b to 3"
I0527 00:38:47.563364       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-66d6495f4b" need=3 creating=1
I0527 00:38:47.568980       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-dd94f59b7-pxkm4"
I0527 00:38:47.573016       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-66d6495f4b-zkp7z"
I0527 00:38:47.577046       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:47.623195       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:47.848837       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.17.19).
E0527 00:38:47.890184       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:38:47.890821       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:38:47.890861       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:38:47.890872       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:38:47.890883       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:38:47.890898       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:38:48.057134       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.166.250).
I0527 00:38:48.165680       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.229.113).
I0527 00:38:48.184624       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9417
I0527 00:38:48.227705       1 aws.go:2291] Waiting for volume "vol-0f131a276669317a6" state: actual=detaching, desired=detached
I0527 00:38:48.325494       1 aws.go:2291] Waiting for volume "vol-00c040294c27f37de" state: actual=detaching, desired=detached
I0527 00:38:48.480724       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.41.103).
I0527 00:38:48.552329       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-dd94f59b7" need=7 creating=1
I0527 00:38:48.552836       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-dd94f59b7 to 7"
I0527 00:38:48.561664       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-dd94f59b7-hz9b9"
I0527 00:38:48.569296       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-66d6495f4b\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:38:48.585785       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-dd94f59b7 to 5"
I0527 00:38:48.585926       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-dd94f59b7" need=5 deleting=2
I0527 00:38:48.585945       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-dd94f59b7" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b]
I0527 00:38:48.586019       1 controller_utils.go:604] "Deleting pod" controller="webserver-dd94f59b7" pod="deployment-6359/webserver-dd94f59b7-fpfs2"
I0527 00:38:48.589154       1 controller_utils.go:604] "Deleting pod" controller="webserver-dd94f59b7" pod="deployment-6359/webserver-dd94f59b7-hz9b9"
I0527 00:38:48.593361       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-66d6495f4b to 5"
I0527 00:38:48.593714       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-66d6495f4b" need=5 creating=2
I0527 00:38:48.599957       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3441^e5704a21-be83-11eb-b599-3616f201064e") from node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:38:48.600948       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-66d6495f4b-86gnt"
I0527 00:38:48.612270       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-dd94f59b7-fpfs2"
I0527 00:38:48.613511       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-66d6495f4b-lh7x7"
I0527 00:38:48.621844       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-dd94f59b7-hz9b9"
I0527 00:38:48.621437       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3441^e5704a21-be83-11eb-b599-3616f201064e") from node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:38:48.621813       1 event.go:291] "Event occurred" object="provisioning-3441/pod-subpath-test-dynamicpv-hrq7" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-994b6628-e941-484c-964a-e2d9b8dfdcce\" "
I0527 00:38:48.648172       1 utils.go:413] couldn't find ipfamilies for headless service: ephemeral-9915-136/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.208.216).
I0527 00:38:49.165038       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.15.2).
I0527 00:38:49.174129       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.229.113).\nE0527 00:38:49.175954       1 tokens_controller.go:262] error synchronizing serviceaccount volume-8924/default: secrets \"default-token-7m4tx\" is forbidden: unable to create new content in namespace volume-8924 because it is being terminated\nI0527 00:38:49.502613       1 operation_generator.go:298] VerifyVolumesAreAttached.BulkVerifyVolumes failed for node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" and volume \"vol1\"\nE0527 00:38:50.198279       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-7167/default: secrets \"default-token-l5hxs\" is forbidden: unable to create new content in namespace security-context-7167 because it is being terminated\nI0527 00:38:50.264184       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 1\"\nI0527 00:38:50.270587       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-dd94f59b7\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:50.276216       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=3 deleting=2\nI0527 00:38:50.276444       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b]\nI0527 00:38:50.277091       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-66d6495f4b\" pod=\"deployment-6359/webserver-66d6495f4b-86gnt\"\nI0527 00:38:50.276984       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-66d6495f4b to 3\"\nI0527 00:38:50.277349       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-66d6495f4b\" pod=\"deployment-6359/webserver-66d6495f4b-lh7x7\"\nI0527 00:38:50.282477       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:50.287623       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-66d6495f4b-86gnt\"\nI0527 00:38:50.290711       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=7 creating=2\nI0527 00:38:50.291981       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 7\"\nI0527 00:38:50.292553       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-66d6495f4b-lh7x7\"\nI0527 00:38:50.302114       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-lv9hs\"\nI0527 00:38:50.310489       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-27 00:38:24 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdct\",\n  InstanceId: \"i-069a67f4c9afb4c56\",\n  
State: \"detaching\",\n  VolumeId: \"vol-0f131a276669317a6\"\n}\nI0527 00:38:50.310695       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"vol1\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0f131a276669317a6\") on node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:38:50.319262       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-9dkhf\"\nI0527 00:38:50.445647       1 aws.go:2291] Waiting for volume \"vol-00c040294c27f37de\" state: actual=detaching, desired=detached\nI0527 00:38:50.657631       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 8\"\nI0527 00:38:50.657699       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=8 creating=1\nI0527 00:38:50.659974       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=4 creating=1\nI0527 00:38:50.663223       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-66d6495f4b to 4\"\nI0527 00:38:50.663846       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-cm7z7\"\nI0527 00:38:50.670891       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-66d6495f4b-pccgs\"\nI0527 
00:38:50.877269       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-4077/test-rollover-controller\" need=0 deleting=1\nI0527 00:38:50.877443       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-4077/test-rollover-controller\" relatedReplicaSets=[test-rollover-controller test-rollover-deployment-78bc8b888c test-rollover-deployment-668db69979]\nI0527 00:38:50.877570       1 controller_utils.go:604] \"Deleting pod\" controller=\"test-rollover-controller\" pod=\"deployment-4077/test-rollover-controller-tmv66\"\nI0527 00:38:50.877735       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rollover-controller to 0\"\nI0527 00:38:50.888464       1 event.go:291] \"Event occurred\" object=\"deployment-4077/test-rollover-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rollover-controller-tmv66\"\nI0527 00:38:51.055401       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=4 creating=1\nI0527 00:38:51.060128       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-66d6495f4b-jrvkm\"\nI0527 00:38:51.077905       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:51.163592       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-resizer. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.116.253).\nI0527 00:38:51.252491       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=4 creating=1\nI0527 00:38:51.262890       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-66d6495f4b-4856b\"\nI0527 00:38:51.455389       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=3 deleting=1\nI0527 00:38:51.455613       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b]\nI0527 00:38:51.455842       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-66d6495f4b\" pod=\"deployment-6359/webserver-66d6495f4b-z6hp7\"\nI0527 00:38:51.456554       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-66d6495f4b to 3\"\nI0527 00:38:51.463641       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=8 creating=1\nI0527 00:38:51.470469       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 9\"\nI0527 00:38:51.470562       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
webserver-dd94f59b7-j7c9w\"\nI0527 00:38:51.474771       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-66d6495f4b-z6hp7\"\nI0527 00:38:51.480333       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:51.496408       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=9 creating=1\nI0527 00:38:51.500190       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-v5wrq\"\nI0527 00:38:51.509556       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:51.620185       1 namespace_controller.go:185] Namespace has been deleted webhook-1435\nI0527 00:38:51.730153       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=9 creating=1\nI0527 00:38:51.793554       1 namespace_controller.go:185] Namespace has been deleted webhook-1435-markers\nI0527 00:38:51.833886       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-gng7c\"\nI0527 00:38:51.879911       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=9 
creating=1\nI0527 00:38:52.030187       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-wnwpc\"\nI0527 00:38:52.087509       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-3920/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.188.50).\nI0527 00:38:52.167590       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.116.253).\nI0527 00:38:52.230213       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=9 creating=1\nI0527 00:38:52.279925       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-bbxr7\"\nI0527 00:38:52.330469       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=9 creating=1\nI0527 00:38:52.337195       1 pvc_protection_controller.go:291] PVC volumemode-7472/pvc-8pwjh is unused\nI0527 00:38:52.343261       1 pv_controller.go:638] volume \"local-l6d2v\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:38:52.345993       1 pv_controller.go:864] volume \"local-l6d2v\" entered phase \"Released\"\nI0527 00:38:52.373866       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [kubectl.example.com/v1, 
Resource=e2e-test-kubectl-9163-crds], removed: [crd-publish-openapi-test-common-group.example.com/v5, Resource=e2e-test-crd-publish-openapi-2470-crds]\nI0527 00:38:52.429584       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-8jb26\"\nI0527 00:38:52.539804       1 pv_controller_base.go:504] deletion of claim \"volumemode-7472/pvc-8pwjh\" was already processed\nI0527 00:38:52.556088       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-9915, name: inline-volume-tester-rxtpp, uid: ed85b454-37d8-4820-8e9f-cd5542b88461] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0527 00:38:52.763718       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-3441-9268/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.25.127).\nI0527 00:38:52.925870       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0527 00:38:52.926200       1 shared_informer.go:247] Caches are synced for garbage collector \nI0527 00:38:52.926355       1 garbagecollector.go:254] synced garbage collector\nI0527 00:38:52.926501       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-9915/inline-volume-tester-rxtpp\" objectUID=ed85b454-37d8-4820-8e9f-cd5542b88461 kind=\"Pod\" virtual=false\nI0527 00:38:52.928463       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-9915, name: inline-volume-tester-rxtpp, uid: ed85b454-37d8-4820-8e9f-cd5542b88461]\nI0527 00:38:53.092296       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-3920/e2e-test-webhook. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.188.50).\nI0527 00:38:53.422877       1 pvc_protection_controller.go:291] PVC volume-67/pvc-m6h8q is unused\nI0527 00:38:53.429464       1 pv_controller.go:638] volume \"local-qwwrq\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:38:53.432404       1 pv_controller.go:864] volume \"local-qwwrq\" entered phase \"Released\"\nI0527 00:38:53.616749       1 pv_controller_base.go:504] deletion of claim \"volume-67/pvc-m6h8q\" was already processed\nI0527 00:38:53.933086       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-66d6495f4b to 2\"\nI0527 00:38:53.933319       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=2 deleting=1\nI0527 00:38:53.933348       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" relatedReplicaSets=[webserver-66d6495f4b webserver-dd94f59b7]\nI0527 00:38:53.933478       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-66d6495f4b\" pod=\"deployment-6359/webserver-66d6495f4b-4856b\"\nI0527 00:38:53.945927       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-66d6495f4b-4856b\"\nE0527 00:38:54.001280       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:38:54.279686       1 namespace_controller.go:185] Namespace has been 
deleted volume-8924\nI0527 00:38:54.510462       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-27 00:38:21 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdby\",\n  InstanceId: \"i-069a67f4c9afb4c56\",\n  State: \"detaching\",\n  VolumeId: \"vol-00c040294c27f37de\"\n}\nI0527 00:38:54.510512       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-00c040294c27f37de\") on node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:38:54.597421       1 namespace_controller.go:185] Namespace has been deleted configmap-6900\nI0527 00:38:54.654943       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" need=1 deleting=1\nI0527 00:38:54.655079       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-66d6495f4b\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b]\nI0527 00:38:54.655221       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-66d6495f4b\" pod=\"deployment-6359/webserver-66d6495f4b-jrvkm\"\nI0527 00:38:54.656597       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-66d6495f4b to 1\"\nI0527 00:38:54.665491       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-66d6495f4b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-66d6495f4b-jrvkm\"\nI0527 00:38:54.670915       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version 
and try again\"\nI0527 00:38:54.680503       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:54.806198       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3920/e2e-test-webhook-8bpds\" objectUID=63cb2ed0-fbc9-485b-a4b2-523b6bbcd5c1 kind=\"EndpointSlice\" virtual=false\nI0527 00:38:54.813499       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3920/e2e-test-webhook-8bpds\" objectUID=63cb2ed0-fbc9-485b-a4b2-523b6bbcd5c1 kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:38:55.005895       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3920/sample-webhook-deployment-6bd9446d55\" objectUID=d3e9f336-6c98-4467-82bd-8d2454b39857 kind=\"ReplicaSet\" virtual=false\nI0527 00:38:55.006213       1 deployment_controller.go:581] Deployment webhook-3920/sample-webhook-deployment has been deleted\nI0527 00:38:55.007751       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3920/sample-webhook-deployment-6bd9446d55\" objectUID=d3e9f336-6c98-4467-82bd-8d2454b39857 kind=\"ReplicaSet\" propagationPolicy=Background\nI0527 00:38:55.010595       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3920/sample-webhook-deployment-6bd9446d55-zlz6f\" objectUID=d3f30dc2-0b07-422a-828b-21b4f3832dae kind=\"Pod\" virtual=false\nI0527 00:38:55.014260       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3920/sample-webhook-deployment-6bd9446d55-zlz6f\" objectUID=d3f30dc2-0b07-422a-828b-21b4f3832dae kind=\"Pod\" propagationPolicy=Background\nW0527 00:38:55.021323       1 aws.go:2268] Expected instance i-033cc39af9e90ab7c/detached for volume vol-07e5b4da20cff9ffe, but found instance i-081c5901a8830e60d/attached\nI0527 00:38:55.064310       1 event.go:291] 
\"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 12\"\nI0527 00:38:55.064754       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=12 creating=3\nI0527 00:38:55.071622       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-mprhv\"\nI0527 00:38:55.079106       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-5wrbp\"\nI0527 00:38:55.081088       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-4t6wt\"\nI0527 00:38:55.106666       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:55.109179       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=7 deleting=5\nI0527 00:38:55.109390       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d]\nI0527 00:38:55.109587       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-4t6wt\"\nI0527 00:38:55.109838       1 controller_utils.go:604] \"Deleting pod\" 
controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-5wrbp\"\nI0527 00:38:55.110064       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-wnwpc\"\nI0527 00:38:55.110305       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-j7c9w\"\nI0527 00:38:55.110516       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-mprhv\"\nI0527 00:38:55.112888       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 7\"\nI0527 00:38:55.128313       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6d6886857d to 5\"\nI0527 00:38:55.128534       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-6d6886857d\" need=5 creating=5\nI0527 00:38:55.129638       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-4t6wt\"\nI0527 00:38:55.133811       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-mprhv\"\nI0527 00:38:55.134000       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-5wrbp\"\nI0527 00:38:55.135136       1 event.go:291] \"Event 
occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-wnwpc\"\nI0527 00:38:55.135616       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-j7c9w\"\nI0527 00:38:55.141452       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6d6886857d-r44fk\"\nI0527 00:38:55.148330       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6d6886857d-7bs54\"\nI0527 00:38:55.149993       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6d6886857d-8n79f\"\nI0527 00:38:55.157952       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6d6886857d-mj84v\"\nI0527 00:38:55.166196       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:55.169660       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
webserver-6d6886857d-ptk5s\"\nI0527 00:38:55.227020       1 namespace_controller.go:185] Namespace has been deleted security-context-7167\nI0527 00:38:55.459200       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=6 deleting=1\nI0527 00:38:55.459233       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-66d6495f4b webserver-6d6886857d webserver-dd94f59b7]\nI0527 00:38:55.459327       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-8jb26\"\nI0527 00:38:55.460058       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 6\"\nI0527 00:38:55.471362       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-8jb26\"\nE0527 00:38:55.485080       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:38:55.502507       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-7842/awsmrlhm\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE0527 00:38:55.862727       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:38:56.659212       1 replica_set.go:595] \"Too many replicas\" 
replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=5 deleting=1\nI0527 00:38:56.659245       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-6d6886857d webserver-dd94f59b7 webserver-66d6495f4b]\nI0527 00:38:56.659359       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-bbxr7\"\nI0527 00:38:56.660086       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 5\"\nI0527 00:38:56.670676       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-6d6886857d\" need=6 creating=1\nI0527 00:38:56.671309       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6d6886857d to 6\"\nI0527 00:38:56.675307       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-bbxr7\"\nI0527 00:38:56.678458       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6d6886857d-xtxpg\"\nI0527 00:38:56.696332       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:57.464258       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 4\"\nI0527 00:38:57.464585       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=4 deleting=1\nI0527 00:38:57.464718       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d]\nI0527 00:38:57.464882       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-v5wrq\"\nI0527 00:38:57.475277       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-6d6886857d\" need=7 creating=1\nI0527 00:38:57.476010       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6d6886857d to 7\"\nI0527 00:38:57.480279       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-v5wrq\"\nI0527 00:38:57.484495       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6d6886857d-sbn2d\"\nI0527 00:38:57.829308       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5385, name: pod1, uid: 25786faf-89c4-4451-b0d2-7369d4269fb4] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0527 00:38:57.829374       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod2\" objectUID=6296b7f9-c1f5-42e9-b627-05b4de478ab5 kind=\"Pod\" virtual=false\nI0527 00:38:57.829587       1 
garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod1\" objectUID=25786faf-89c4-4451-b0d2-7369d4269fb4 kind=\"Pod\" virtual=false\nI0527 00:38:57.831247       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5385, name: pod2, uid: 6296b7f9-c1f5-42e9-b627-05b4de478ab5] to attemptToDelete, because its owner [v1/Pod, namespace: gc-5385, name: pod1, uid: 25786faf-89c4-4451-b0d2-7369d4269fb4] is deletingDependents\nI0527 00:38:57.832366       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5385, name: pod2, uid: 6296b7f9-c1f5-42e9-b627-05b4de478ab5] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0527 00:38:57.834550       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5385, name: pod2, uid: 6296b7f9-c1f5-42e9-b627-05b4de478ab5] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0527 00:38:57.835110       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod2\" objectUID=6296b7f9-c1f5-42e9-b627-05b4de478ab5 kind=\"Pod\" virtual=false\nI0527 00:38:57.835631       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod3\" objectUID=1755cafe-96fd-40d8-894f-379d1518fb6b kind=\"Pod\" virtual=false\nI0527 00:38:57.838287       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5385, name: pod3, uid: 1755cafe-96fd-40d8-894f-379d1518fb6b] to attemptToDelete, because its owner [v1/Pod, namespace: gc-5385, name: pod2, uid: 6296b7f9-c1f5-42e9-b627-05b4de478ab5] is deletingDependents\nI0527 00:38:57.839540       1 garbagecollector.go:545] processing object [v1/Pod, namespace: gc-5385, name: pod3, uid: 1755cafe-96fd-40d8-894f-379d1518fb6b], some of its owners and its dependent [[v1/Pod, namespace: gc-5385, name: pod1, uid: 25786faf-89c4-4451-b0d2-7369d4269fb4]] have FinalizerDeletingDependents, to prevent potential cycle, its ownerReferences are going to be modified to be non-blocking, then the 
object is going to be deleted with Foreground\nI0527 00:38:57.842431       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5385, name: pod3, uid: 1755cafe-96fd-40d8-894f-379d1518fb6b] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0527 00:38:57.842618       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod2\" objectUID=6296b7f9-c1f5-42e9-b627-05b4de478ab5 kind=\"Pod\" virtual=false\nI0527 00:38:57.844390       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-5385, name: pod2, uid: 6296b7f9-c1f5-42e9-b627-05b4de478ab5]\nI0527 00:38:57.846653       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5385, name: pod3, uid: 1755cafe-96fd-40d8-894f-379d1518fb6b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0527 00:38:57.846947       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod1\" objectUID=25786faf-89c4-4451-b0d2-7369d4269fb4 kind=\"Pod\" virtual=false\nI0527 00:38:57.847120       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod3\" objectUID=1755cafe-96fd-40d8-894f-379d1518fb6b kind=\"Pod\" virtual=false\nI0527 00:38:57.855698       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod3\" objectUID=1755cafe-96fd-40d8-894f-379d1518fb6b kind=\"Pod\" virtual=false\nI0527 00:38:57.856169       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod1\" objectUID=25786faf-89c4-4451-b0d2-7369d4269fb4 kind=\"Pod\" virtual=false\nI0527 00:38:57.859479       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-5385, name: pod1, uid: 25786faf-89c4-4451-b0d2-7369d4269fb4]\nI0527 00:38:57.867175       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5385/pod3\" objectUID=1755cafe-96fd-40d8-894f-379d1518fb6b kind=\"Pod\" virtual=false\nI0527 00:38:57.868956       
1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-5385, name: pod3, uid: 1755cafe-96fd-40d8-894f-379d1518fb6b]\nI0527 00:38:57.890680       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:57.890845       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:57.890893       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:57.890922       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:57.891340       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:38:57.892196       1 pvc_protection_controller.go:291] PVC provisioning-7431/pvc-btsg2 is unused\nI0527 00:38:57.898355       1 pv_controller.go:638] volume \"local-wwk56\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:38:57.901255       1 pv_controller.go:864] volume \"local-wwk56\" entered phase \"Released\"\nI0527 00:38:58.086176       1 pv_controller_base.go:504] deletion of claim \"provisioning-7431/pvc-btsg2\" was already processed\nI0527 00:38:58.560107       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-6107/update-demo-nautilus-vhmbd\" objectUID=dea560cf-5f47-4770-89d6-d6681f8329f0 kind=\"Pod\" virtual=false\nI0527 00:38:58.560885       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-6107/update-demo-nautilus-xbdgg\" objectUID=06f4ba25-50a8-48d5-bbca-fad4d39ecd8d kind=\"Pod\" virtual=false\nI0527 00:38:58.566601       1 
garbagecollector.go:580] \"Deleting object\" object=\"kubectl-6107/update-demo-nautilus-xbdgg\" objectUID=06f4ba25-50a8-48d5-bbca-fad4d39ecd8d kind=\"Pod\" propagationPolicy=Background\nI0527 00:38:58.567090       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-6107/update-demo-nautilus-vhmbd\" objectUID=dea560cf-5f47-4770-89d6-d6681f8329f0 kind=\"Pod\" propagationPolicy=Background\nE0527 00:38:58.874795       1 tokens_controller.go:262] error synchronizing serviceaccount volume-7907/default: secrets \"default-token-9fhzw\" is forbidden: unable to create new content in namespace volume-7907 because it is being terminated\nE0527 00:38:58.987834       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:38:59.269813       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6d6886857d to 6\"\nI0527 00:38:59.270368       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-6d6886857d\" need=6 deleting=1\nI0527 00:38:59.270443       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6d6886857d\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d]\nI0527 00:38:59.270605       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6d6886857d\" pod=\"deployment-6359/webserver-6d6886857d-sbn2d\"\nI0527 00:38:59.274673       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=3 deleting=1\nI0527 00:38:59.274845       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 
to 3\"\nI0527 00:38:59.274919       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d]\nI0527 00:38:59.275053       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-gng7c\"\nI0527 00:38:59.300832       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:59.301723       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6d6886857d-sbn2d\"\nI0527 00:38:59.301743       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-gng7c\"\nI0527 00:38:59.316360       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:59.325267       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-4077/test-rollover-deployment-668db69979\" need=1 creating=1\nI0527 00:38:59.336420       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-6d6886857d\" need=2 deleting=4\nI0527 00:38:59.336454       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6d6886857d\" relatedReplicaSets=[webserver-6d6886857d webserver-84767c454 webserver-dd94f59b7 
webserver-66d6495f4b]\nI0527 00:38:59.336541       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6d6886857d\" pod=\"deployment-6359/webserver-6d6886857d-mj84v\"\nI0527 00:38:59.336752       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6d6886857d to 2\"\nI0527 00:38:59.337288       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6d6886857d\" pod=\"deployment-6359/webserver-6d6886857d-8n79f\"\nI0527 00:38:59.337420       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6d6886857d\" pod=\"deployment-6359/webserver-6d6886857d-r44fk\"\nI0527 00:38:59.337533       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6d6886857d\" pod=\"deployment-6359/webserver-6d6886857d-xtxpg\"\nI0527 00:38:59.354745       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=4 creating=4\nI0527 00:38:59.354942       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-84767c454 to 4\"\nI0527 00:38:59.370573       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:38:59.372925       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6d6886857d-mj84v\"\nI0527 00:38:59.372951       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-84767c454-pltn2\"\nI0527 00:38:59.377179       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6d6886857d-xtxpg\"\nI0527 00:38:59.378050       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6d6886857d-r44fk\"\nI0527 00:38:59.379913       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-84767c454-hjxnd\"\nI0527 00:38:59.379935       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6d6886857d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6d6886857d-8n79f\"\nI0527 00:38:59.385937       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-84767c454-78jcp\"\nI0527 00:38:59.395115       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-84767c454-kwgvq\"\nI0527 00:38:59.457015       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4077/test-rollover-deployment-668db69979\" objectUID=d2e1cd2d-7a69-46d1-a77e-a6487bd89cf0 kind=\"ReplicaSet\" virtual=false\nI0527 00:38:59.457630       1 deployment_controller.go:581] Deployment deployment-4077/test-rollover-deployment has been deleted\nI0527 00:38:59.457885       
1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4077/test-rollover-controller\" objectUID=dd2258b4-458e-4fc2-a2ca-a86fa7189187 kind=\"ReplicaSet\" virtual=false\nI0527 00:38:59.458569       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4077/test-rollover-deployment-78bc8b888c\" objectUID=6776d413-61d3-4a83-9b51-1dabe4f59be1 kind=\"ReplicaSet\" virtual=false\nI0527 00:38:59.463741       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4077/test-rollover-controller\" objectUID=dd2258b4-458e-4fc2-a2ca-a86fa7189187 kind=\"ReplicaSet\" propagationPolicy=Background\nI0527 00:38:59.465694       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4077/test-rollover-deployment-668db69979\" objectUID=d2e1cd2d-7a69-46d1-a77e-a6487bd89cf0 kind=\"ReplicaSet\" propagationPolicy=Background\nI0527 00:38:59.468922       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4077/test-rollover-deployment-78bc8b888c\" objectUID=6776d413-61d3-4a83-9b51-1dabe4f59be1 kind=\"ReplicaSet\" propagationPolicy=Background\nE0527 00:38:59.469435       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-7472/default: secrets \"default-token-rp85s\" is forbidden: unable to create new content in namespace volumemode-7472 because it is being terminated\nE0527 00:38:59.479480       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"ReplicaSet\", Name:\"test-rollover-deployment-668db69979\", UID:\"d2e1cd2d-7a69-46d1-a77e-a6487bd89cf0\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-4077\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, 
deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"Deployment\", Name:\"test-rollover-deployment\", UID:\"9eb17313-650a-4b8d-8a64-f4e50249868e\", Controller:(*bool)(0xc002b8bf47), BlockOwnerDeletion:(*bool)(0xc002b8bf48)}}}: replicasets.apps \"test-rollover-deployment-668db69979\" not found\nI0527 00:38:59.484911       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4077/test-rollover-deployment-668db69979\" objectUID=d2e1cd2d-7a69-46d1-a77e-a6487bd89cf0 kind=\"ReplicaSet\" virtual=false\nE0527 00:38:59.865820       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0527 00:39:00.005509       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-2763/default: secrets \"default-token-dtklh\" is forbidden: unable to create new content in namespace kubectl-2763 because it is being terminated\nE0527 00:39:00.007943       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-1496/pvc-26sv2: storageclass.storage.k8s.io \"provisioning-1496\" not found\nI0527 00:39:00.008439       1 event.go:291] \"Event occurred\" object=\"provisioning-1496/pvc-26sv2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1496\\\" not found\"\nI0527 00:39:00.204109       1 pv_controller.go:864] volume \"local-s8f4b\" entered phase 
\"Available\"\nI0527 00:39:00.526575       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=2 deleting=1\nI0527 00:39:00.526608       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d webserver-84767c454]\nI0527 00:39:00.526694       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-9dkhf\"\nI0527 00:39:00.527257       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 2\"\nI0527 00:39:00.537732       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-9dkhf\"\nI0527 00:39:00.539792       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=5 creating=1\nI0527 00:39:00.540424       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-84767c454 to 5\"\nI0527 00:39:00.545308       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-84767c454-8d6nq\"\nI0527 00:39:00.991912       1 pv_controller.go:915] claim \"provisioning-1496/pvc-26sv2\" bound to volume \"local-s8f4b\"\nI0527 00:39:00.998660       1 pv_controller.go:864] volume \"local-s8f4b\" entered phase \"Bound\"\nI0527 00:39:00.998770       1 pv_controller.go:967] volume \"local-s8f4b\" bound to claim 
\"provisioning-1496/pvc-26sv2\"\nI0527 00:39:01.004342       1 pv_controller.go:808] claim \"provisioning-1496/pvc-26sv2\" entered phase \"Bound\"\nI0527 00:39:01.064243       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=1 deleting=1\nI0527 00:39:01.064282       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d webserver-84767c454]\nI0527 00:39:01.064490       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-2z2mj\"\nI0527 00:39:01.066403       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 1\"\nI0527 00:39:01.076759       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=6 creating=1\nI0527 00:39:01.077921       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-84767c454 to 6\"\nI0527 00:39:01.077933       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-2z2mj\"\nI0527 00:39:01.088753       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-84767c454-hfbgm\"\nE0527 00:39:01.163683       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:01.253109       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-1a/vol-0226586ae109ac335\nI0527 00:39:01.313275       1 pv_controller.go:1652] volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" provisioned for claim \"fsgroupchangepolicy-7842/awsmrlhm\"\nI0527 00:39:01.313510       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-7842/awsmrlhm\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef using kubernetes.io/aws-ebs\"\nI0527 00:39:01.317184       1 pv_controller.go:864] volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" entered phase \"Bound\"\nI0527 00:39:01.317378       1 pv_controller.go:967] volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" bound to claim \"fsgroupchangepolicy-7842/awsmrlhm\"\nI0527 00:39:01.322762       1 pv_controller.go:808] claim \"fsgroupchangepolicy-7842/awsmrlhm\" entered phase \"Bound\"\nE0527 00:39:01.773929       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0527 00:39:01.836754       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:01.937480       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335\") from node \"ip-172-20-33-93.ap-southeast-1.compute.internal\" \nI0527 00:39:02.004887       1 aws.go:2014] Assigned mount device br -> volume vol-0226586ae109ac335\nE0527 00:39:02.165511   
    1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:02.270785       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-5189\nI0527 00:39:02.305845       1 event.go:291] \"Event occurred\" object=\"gc-9929/simple\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job simple-1622075940\"\nI0527 00:39:02.315652       1 cronjob_controller.go:188] Unable to update status for gc-9929/simple (rv = 27719): Operation cannot be fulfilled on cronjobs.batch \"simple\": the object has been modified; please apply your changes to the latest version and try again\nI0527 00:39:02.320017       1 event.go:291] \"Event occurred\" object=\"gc-9929/simple-1622075940\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simple-1622075940-vpp8n\"\nI0527 00:39:02.330477       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" need=0 deleting=1\nI0527 00:39:02.331279       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-dd94f59b7\" relatedReplicaSets=[webserver-6d6886857d webserver-84767c454 webserver-dd94f59b7 webserver-66d6495f4b]\nI0527 00:39:02.331553       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-6359/webserver-dd94f59b7-fjgbk\"\nI0527 00:39:02.331040       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 0\"\nI0527 00:39:02.341879       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=7 creating=1\nI0527 00:39:02.342314       1 
event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-84767c454 to 7"
I0527 00:39:02.348115       1 event.go:291] "Event occurred" object="deployment-6359/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-dd94f59b7-fjgbk"
I0527 00:39:02.350873       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-84767c454-sz7v7"
I0527 00:39:02.397342       1 aws.go:2427] AttachVolume volume="vol-0226586ae109ac335" instance="i-081c5901a8830e60d" request returned {
  AttachTime: 2021-05-27 00:39:02.386 +0000 UTC,
  Device: "/dev/xvdbr",
  InstanceId: "i-081c5901a8830e60d",
  State: "attaching",
  VolumeId: "vol-0226586ae109ac335"
}
I0527 00:39:02.555573       1 pvc_protection_controller.go:291] PVC provisioning-5523/pvc-7z9hq is unused
I0527 00:39:02.567069       1 pv_controller.go:638] volume "local-fgm79" is released and reclaim policy "Retain" will be executed
I0527 00:39:02.571158       1 pv_controller.go:864] volume "local-fgm79" entered phase "Released"
I0527 00:39:02.575344       1 garbagecollector.go:471] "Processing object" object="gc-9929/simple-1622075940" objectUID=b9b2e3b7-b25f-4a4b-95d9-c74f229a5a6f kind="Job" virtual=false
I0527 00:39:02.577122       1 garbagecollector.go:580] "Deleting object" object="gc-9929/simple-1622075940" objectUID=b9b2e3b7-b25f-4a4b-95d9-c74f229a5a6f kind="Job" propagationPolicy=Background
I0527 00:39:02.580766       1 garbagecollector.go:471] "Processing object" object="gc-9929/simple-1622075940-vpp8n" objectUID=16ba45b6-423d-4ea1-b71f-e4f8f6cb7c39 kind="Pod" virtual=false
I0527 00:39:02.582588       1 garbagecollector.go:580] "Deleting object" object="gc-9929/simple-1622075940-vpp8n" objectUID=16ba45b6-423d-4ea1-b71f-e4f8f6cb7c39 kind="Pod" propagationPolicy=Background
I0527 00:39:02.745932       1 pv_controller_base.go:504] deletion of claim "provisioning-5523/pvc-7z9hq" was already processed
I0527 00:39:02.803061       1 namespace_controller.go:185] Namespace has been deleted volumemode-3834
I0527 00:39:02.845995       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-01a34b0f6200cb181") on node "ip-172-20-40-209.ap-southeast-1.compute.internal"
I0527 00:39:02.848163       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-01a34b0f6200cb181") on node "ip-172-20-40-209.ap-southeast-1.compute.internal"
I0527 00:39:03.515810       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-66d6495f4b" need=1 creating=1
I0527 00:39:03.524079       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-66d6495f4b-9lvzx"
I0527 00:39:03.655427       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3441^e5704a21-be83-11eb-b599-3616f201064e") on node "ip-172-20-40-209.ap-southeast-1.compute.internal"
I0527 00:39:03.657836       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3441^e5704a21-be83-11eb-b599-3616f201064e") on node "ip-172-20-40-209.ap-southeast-1.compute.internal"
I0527 00:39:03.671065       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3441^e5704a21-be83-11eb-b599-3616f201064e") on node "ip-172-20-40-209.ap-southeast-1.compute.internal"
I0527 00:39:03.718983       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6d6886857d" need=2 creating=1
I0527 00:39:03.722760       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6d6886857d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6d6886857d-b9hrc"
I0527 00:39:03.780433       1 pvc_protection_controller.go:291] PVC volume-7951/awsmtfvq is unused
I0527 00:39:03.786055       1 pv_controller.go:638] volume "pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" is released and reclaim policy "Delete" will be executed
I0527 00:39:03.791522       1 pv_controller.go:864] volume "pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" entered phase "Released"
I0527 00:39:03.792914       1 pv_controller.go:1326] isVolumeReleased[pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1]: volume is released
I0527 00:39:03.922646       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-84767c454" need=7 creating=1
I0527 00:39:03.929247       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-84767c454-8hvj2"
I0527 00:39:03.942336       1 namespace_controller.go:185] Namespace has been deleted volume-7907
I0527 00:39:03.953250       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:03.962197       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-01a34b0f6200cb181: error deleting EBS volume "vol-01a34b0f6200cb181" since volume is currently attached to "i-069a67f4c9afb4c56"
E0527 00:39:03.962287       1 goroutinemap.go:150] Operation for "delete-pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1[d1f65ab0-2b70-4a94-8a76-4b4dbdd310b3]" failed. No retries permitted until 2021-05-27 00:39:04.462257478 +0000 UTC m=+1074.718520069 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-01a34b0f6200cb181\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:39:03.962510       1 event.go:291] "Event occurred" object="pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-01a34b0f6200cb181\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:39:04.126396       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-84767c454" need=7 creating=1
I0527 00:39:04.132160       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-84767c454-tbmd4"
I0527 00:39:04.330081       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-84767c454" need=7 creating=1
I0527 00:39:04.335687       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-84767c454-6fxp8"
I0527 00:39:04.510147       1 namespace_controller.go:185] Namespace has been deleted deployment-4077
I0527 00:39:04.525117       1 aws.go:2037] Releasing in-process attachment entry: br -> volume vol-0226586ae109ac335
I0527 00:39:04.525270       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") from node "ip-172-20-33-93.ap-southeast-1.compute.internal"
I0527 00:39:04.525408       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-7842/pod-80d60a71-971c-4944-b1c9-73bf63c4c386" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" "
I0527 00:39:04.615011       1 namespace_controller.go:185] Namespace has been deleted webhook-3920
I0527 00:39:04.647918       1 namespace_controller.go:185] Namespace has been deleted volumemode-7472
I0527 00:39:04.746170       1 namespace_controller.go:185] Namespace has been deleted webhook-3920-markers
I0527 00:39:05.086694       1 namespace_controller.go:185] Namespace has been deleted kubectl-2763
I0527 00:39:05.251991       1 namespace_controller.go:185] Namespace has been deleted volume-1110
I0527 00:39:05.344952       1 pvc_protection_controller.go:291] PVC provisioning-3441/csi-hostpath8279g is unused
I0527 00:39:05.351655       1 pv_controller.go:638] volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" is released and reclaim policy "Delete" will be executed
I0527 00:39:05.354869       1 pv_controller.go:864] volume "pvc-994b6628-e941-484c-964a-e2d9b8dfdcce" entered phase "Released"
I0527 00:39:05.357846       1 pv_controller.go:1326] isVolumeReleased[pvc-994b6628-e941-484c-964a-e2d9b8dfdcce]: volume is released
I0527 00:39:05.378230       1 pv_controller_base.go:504] deletion of claim "provisioning-3441/csi-hostpath8279g" was already processed
I0527 00:39:05.638306       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-66d6495f4b" need=0 deleting=1
I0527 00:39:05.638350       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-66d6495f4b" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d webserver-84767c454]
I0527 00:39:05.639029       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-66d6495f4b to 0"
I0527 00:39:05.639246       1 controller_utils.go:604] "Deleting pod" controller="webserver-66d6495f4b" pod="deployment-6359/webserver-66d6495f4b-9lvzx"
I0527 00:39:05.647687       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-84767c454" need=8 creating=1
I0527 00:39:05.648251       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-84767c454 to 8"
I0527 00:39:05.651374       1 event.go:291] "Event occurred" object="deployment-6359/webserver-66d6495f4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-66d6495f4b-9lvzx"
I0527 00:39:05.658711       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-84767c454-rmnwc"
I0527 00:39:05.978706       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe") on node "ip-172-20-33-93.ap-southeast-1.compute.internal"
I0527 00:39:05.982159       1 operation_generator.go:1409] Verified volume is safe to detach for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe") on node "ip-172-20-33-93.ap-southeast-1.compute.internal"
I0527 00:39:06.129813       1 event.go:291] "Event occurred" object="csi-mock-volumes-580-1307/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
E0527 00:39:06.371098       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7431/default: secrets "default-token-bx5vs" is forbidden: unable to create new content in namespace provisioning-7431 because it is being terminated
I0527 00:39:06.378411       1 pv_controller.go:864] volume "local-pvm79f7" entered phase "Available"
I0527 00:39:06.419719       1 namespace_controller.go:185] Namespace has been deleted replicaset-2820
I0527 00:39:06.525498       1 event.go:291] "Event occurred" object="csi-mock-volumes-580-1307/csi-mockplugin-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful"
I0527 00:39:06.575513       1 pv_controller.go:915] claim "persistent-local-volumes-test-9411/pvc-7js6q" bound to volume "local-pvm79f7"
I0527 00:39:06.586959       1 pv_controller.go:864] volume "local-pvm79f7" entered phase "Bound"
I0527 00:39:06.586986       1 pv_controller.go:967] volume "local-pvm79f7" bound to claim "persistent-local-volumes-test-9411/pvc-7js6q"
I0527 00:39:06.594053       1 pv_controller.go:808] claim "persistent-local-volumes-test-9411/pvc-7js6q" entered phase "Bound"
I0527 00:39:06.652743       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-6d6886857d" need=1 deleting=1
I0527 00:39:06.653005       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-6d6886857d" relatedReplicaSets=[webserver-dd94f59b7 webserver-66d6495f4b webserver-6d6886857d webserver-84767c454]
I0527 00:39:06.653254       1 controller_utils.go:604] "Deleting pod" controller="webserver-6d6886857d" pod="deployment-6359/webserver-6d6886857d-b9hrc"
I0527 00:39:06.655750       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6d6886857d to 1"
I0527 00:39:06.664013       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6d6886857d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6d6886857d-b9hrc"
I0527 00:39:07.024978       1 pvc_protection_controller.go:291] PVC volume-9225/pvc-kzjwm is unused
I0527 00:39:07.030217       1 pv_controller.go:638] volume "local-hnmng" is released and reclaim policy "Retain" will be executed
I0527 00:39:07.032860       1 pv_controller.go:864] volume "local-hnmng" entered phase "Released"
E0527 00:39:07.169753       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:07.219739       1 pv_controller_base.go:504] deletion of claim "volume-9225/pvc-kzjwm" was already processed
I0527 00:39:07.261925       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-6d6886857d" need=0 deleting=1
I0527 00:39:07.262176       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-6d6886857d" relatedReplicaSets=[webserver-66d6495f4b webserver-6d6886857d webserver-84767c454 webserver-dd94f59b7]
I0527 00:39:07.262332       1 controller_utils.go:604] "Deleting pod" controller="webserver-6d6886857d" pod="deployment-6359/webserver-6d6886857d-ptk5s"
I0527 00:39:07.262687       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6d6886857d to 0"
I0527 00:39:07.281436       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6d6886857d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6d6886857d-ptk5s"
I0527 00:39:07.892025       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:07.892025       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:07.892053       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:07.892065       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:07.892123       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:08.224510       1 namespace_controller.go:185] Namespace has been deleted volume-67
I0527 00:39:08.411456       1 aws.go:2291] Waiting for volume "vol-01a34b0f6200cb181" state: actual=detaching, desired=detached
I0527 00:39:08.707273       1 namespace_controller.go:185] Namespace has been deleted kubectl-9801
E0527 00:39:08.996794       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:39:09.266126       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4767/default: secrets "default-token-2s9k9" is forbidden: unable to create new content in namespace csi-mock-volumes-4767 because it is being terminated
I0527 00:39:10.491561       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-05-27 00:38:08 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdcc",
  InstanceId: "i-069a67f4c9afb4c56",
  State: "detaching",
  VolumeId: "vol-01a34b0f6200cb181"
}
I0527 00:39:10.491608       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-01a34b0f6200cb181") on node "ip-172-20-40-209.ap-southeast-1.compute.internal"
I0527 00:39:10.781973       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-84767c454" need=7 deleting=1
I0527 00:39:10.782128       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-84767c454" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d]
I0527 00:39:10.782335       1 controller_utils.go:604] "Deleting pod" controller="webserver-84767c454" pod="deployment-6359/webserver-84767c454-rmnwc"
I0527 00:39:10.782543       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-84767c454 to 7"
I0527 00:39:10.790980       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=2 creating=2
I0527 00:39:10.792148       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 2"
I0527 00:39:10.800251       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-wmnzj"
I0527 00:39:10.800331       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-84767c454-rmnwc"
I0527 00:39:10.804167       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:10.807584       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-xqp7d"
I0527 00:39:10.826846       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:10.841245       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-84767c454 to 6"
I0527 00:39:10.848704       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=3 creating=1
I0527 00:39:10.849259       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 3"
I0527 00:39:10.860051       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-bwzzm"
I0527 00:39:10.866765       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-84767c454" need=6 deleting=1
I0527 00:39:10.866964       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-84767c454" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4]
I0527 00:39:10.867250       1 controller_utils.go:604] "Deleting pod" controller="webserver-84767c454" pod="deployment-6359/webserver-84767c454-6fxp8"
I0527 00:39:10.879545       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-84767c454-6fxp8"
I0527 00:39:11.043901       1 namespace_controller.go:185] Namespace has been deleted kubectl-6107
W0527 00:39:11.101582       1 aws.go:2268] Expected instance i-033cc39af9e90ab7c/detached for volume vol-07e5b4da20cff9ffe, but found instance i-081c5901a8830e60d/detached
I0527 00:39:11.250687       1 replica_set.go:559] "Too few replicas" replicaSet="crd-webhook-3399/sample-crd-conversion-webhook-deployment-7d6697c5b7" need=1 creating=1
I0527 00:39:11.251010       1 event.go:291] "Event occurred" object="crd-webhook-3399/sample-crd-conversion-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-crd-conversion-webhook-deployment-7d6697c5b7 to 1"
I0527 00:39:11.263236       1 deployment_controller.go:490] "Error syncing deployment" deployment="crd-webhook-3399/sample-crd-conversion-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-crd-conversion-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:11.265117       1 event.go:291] "Event occurred" object="crd-webhook-3399/sample-crd-conversion-webhook-deployment-7d6697c5b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-crd-conversion-webhook-deployment-7d6697c5b7-d76mp"
I0527 00:39:11.402480       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe") on node "ip-172-20-33-93.ap-southeast-1.compute.internal"
I0527 00:39:11.451744       1 namespace_controller.go:185] Namespace has been deleted provisioning-7431
I0527 00:39:11.465285       1 pvc_protection_controller.go:291] PVC provisioning-1496/pvc-26sv2 is unused
I0527 00:39:11.470718       1 pv_controller.go:638] volume "local-s8f4b" is released and reclaim policy "Retain" will be executed
I0527 00:39:11.473569       1 pv_controller.go:864] volume "local-s8f4b" entered phase "Released"
I0527 00:39:11.672854       1 pv_controller_base.go:504] deletion of claim "provisioning-1496/pvc-26sv2" was already processed
I0527 00:39:12.119274       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-84767c454" need=5 deleting=1
I0527 00:39:12.119658       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-84767c454" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4]
I0527 00:39:12.120441       1 controller_utils.go:604] "Deleting pod" controller="webserver-84767c454" pod="deployment-6359/webserver-84767c454-8d6nq"
I0527 00:39:12.120136       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-84767c454 to 5"
I0527 00:39:12.127590       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 4"
I0527 00:39:12.127973       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=4 creating=1
I0527 00:39:12.149339       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-4qxft"
I0527 00:39:12.149519       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-84767c454-8d6nq"
I0527 00:39:12.188778       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:12.845484       1 event.go:291] "Event occurred" object="csi-mock-volumes-580/pvc-5rs74" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-580\" or manually created by system administrator"
I0527 00:39:12.846193       1 event.go:291] "Event occurred" object="csi-mock-volumes-580/pvc-5rs74" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-580\" or manually created by system administrator"
I0527 00:39:12.869947       1 pv_controller.go:864] volume "pvc-21fe0e4b-75ad-479b-af52-92e2a191069e" entered phase "Bound"
I0527 00:39:12.869978       1 pv_controller.go:967] volume "pvc-21fe0e4b-75ad-479b-af52-92e2a191069e" bound to claim "csi-mock-volumes-580/pvc-5rs74"
I0527 00:39:12.875150       1 pv_controller.go:808] claim "csi-mock-volumes-580/pvc-5rs74" entered phase "Bound"
E0527 00:39:12.940919       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-1142/pvc-kjz54: storageclass.storage.k8s.io "provisioning-1142" not found
I0527 00:39:12.941814       1 event.go:291] "Event occurred" object="provisioning-1142/pvc-kjz54" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-1142\" not found"
I0527 00:39:13.135654       1 pv_controller.go:864] volume "local-pdhlt" entered phase "Available"
I0527 00:39:13.509768       1 namespace_controller.go:185] Namespace has been deleted gc-5385
E0527 00:39:13.560371       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5523/default: secrets "default-token-q2tkj" is forbidden: unable to create new content in namespace provisioning-5523 because it is being terminated
E0527 00:39:13.607499       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-4995/pvc-vkktk: storageclass.storage.k8s.io "provisioning-4995" not found
I0527 00:39:13.607856       1 event.go:291] "Event occurred" object="provisioning-4995/pvc-vkktk" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-4995\" not found"
I0527 00:39:13.803809       1 pv_controller.go:864] volume "local-kqqvv" entered phase "Available"
I0527 00:39:14.378258       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4767
I0527 00:39:14.582428       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-84767c454 to 4"
I0527 00:39:14.583205       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-84767c454" need=4 deleting=1
I0527 00:39:14.583265       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-84767c454" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4]
I0527 00:39:14.583383       1 controller_utils.go:604] "Deleting pod" controller="webserver-84767c454" pod="deployment-6359/webserver-84767c454-tbmd4"
I0527 00:39:14.594159       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=5 creating=1
I0527 00:39:14.595135       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 5"
I0527 00:39:14.606111       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-82jr2"
I0527 00:39:14.607451       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-84767c454-tbmd4"
I0527 00:39:14.647428       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
E0527 00:39:14.690739       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:39:14.704377       1 tokens_controller.go:262] error synchronizing serviceaccount volume-1196/default: secrets "default-token-sjg9z" is forbidden: unable to create new content in namespace volume-1196 because it is being terminated
I0527 00:39:14.861775       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-84767c454" need=3 deleting=1
I0527 00:39:14.861826       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-84767c454" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4]
I0527 00:39:14.861946       1 controller_utils.go:604] "Deleting pod" controller="webserver-84767c454" pod="deployment-6359/webserver-84767c454-78jcp"
I0527 00:39:14.862655       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-84767c454 to 3"
I0527 00:39:14.872695       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 6"
I0527 00:39:14.875536       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=6 creating=1
I0527 00:39:14.877192       1 event.go:291] "Event occurred" object="deployment-6359/webserver-84767c454" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-84767c454-78jcp"
I0527 00:39:14.885679       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-wczt6"
E0527 00:39:15.556442       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:15.992300       1 pv_controller.go:915] claim "provisioning-1142/pvc-kjz54" bound to volume "local-pdhlt"
I0527 00:39:15.999728       1 pv_controller.go:1326] isVolumeReleased[pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1]: volume is released
I0527 00:39:16.004894       1 pv_controller.go:864] volume "local-pdhlt" entered phase "Bound"
I0527 00:39:16.005053       1 pv_controller.go:967] volume "local-pdhlt" bound to claim "provisioning-1142/pvc-kjz54"
I0527 00:39:16.012244       1 pv_controller.go:808] claim "provisioning-1142/pvc-kjz54" entered phase "Bound"
I0527 00:39:16.012546       1 pv_controller.go:915] claim "provisioning-4995/pvc-vkktk" bound to volume "local-kqqvv"
I0527 00:39:16.024955       1 pv_controller.go:864] volume "local-kqqvv" entered phase "Bound"
I0527 00:39:16.025129       1 pv_controller.go:967] volume "local-kqqvv" bound to claim "provisioning-4995/pvc-vkktk"
I0527 00:39:16.030829       1 namespace_controller.go:185] Namespace has been deleted provisioning-3441
I0527 00:39:16.032660       1 pv_controller.go:808] claim "provisioning-4995/pvc-vkktk" entered phase "Bound"
I0527 00:39:16.070060       1 garbagecollector.go:471] "Processing object" object="provisioning-3441-9268/csi-hostpath-attacher-v54sk" objectUID=a913d05b-7ec4-4660-834f-8c4c1f7db821 kind="EndpointSlice" virtual=false
I0527 00:39:16.078653       1 garbagecollector.go:580] "Deleting object" object="provisioning-3441-9268/csi-hostpath-attacher-v54sk" objectUID=a913d05b-7ec4-4660-834f-8c4c1f7db821 kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:16.191752       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-1a/vol-01a34b0f6200cb181
I0527 00:39:16.191781       1 pv_controller.go:1421] volume "pvc-6fbb7dd4-4a69-4163-8582-0d2cdd619ab1" deleted
I0527 00:39:16.199678       1 pv_controller_base.go:504] deletion of claim "volume-7951/awsmtfvq" was already processed
I0527 00:39:16.218855       1 utils.go:413] couldn't find ipfamilies for headless service: crd-webhook-3399/e2e-test-crd-conversion-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.42.102).
I0527 00:39:16.278656       1 garbagecollector.go:471] "Processing object" object="provisioning-3441-9268/csi-hostpath-attacher-5f558d4f98" objectUID=12396a96-afbf-418f-97bd-1d7bb08dfb89 kind="ControllerRevision" virtual=false
I0527 00:39:16.278875       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3441-9268/csi-hostpath-attacher
I0527 00:39:16.278989       1 garbagecollector.go:471] "Processing object" object="provisioning-3441-9268/csi-hostpath-attacher-0" objectUID=320cfe11-1cc1-4e19-955f-614c525f5e68 kind="Pod" virtual=false
I0527 00:39:16.282282       1 garbagecollector.go:580] "Deleting object" object="provisioning-3441-9268/csi-hostpath-attacher-5f558d4f98" objectUID=12396a96-afbf-418f-97bd-1d7bb08dfb89 kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:16.282447       1 garbagecollector.go:580] "Deleting object" object="provisioning-3441-9268/csi-hostpath-attacher-0" objectUID=320cfe11-1cc1-4e19-955f-614c525f5e68 kind="Pod" propagationPolicy=Background
I0527 00:39:16.509774       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4767-4374/csi-mockplugin-5b49f58987" objectUID=3346ae2b-fa0b-4949-bc43-905a042a9776 kind="ControllerRevision" virtual=false
I0527 00:39:16.509932       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-4767-4374/csi-mockplugin
I0527 00:39:16.509942       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4767-4374/csi-mockplugin-0" objectUID=16f936ee-c1b1-4e8c-b940-4196118432f1 kind="Pod" virtual=false
I0527 00:39:16.519209       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4767-4374/csi-mockplugin-5b49f58987" objectUID=3346ae2b-fa0b-4949-bc43-905a042a9776 kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:16.519321       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4767-4374/csi-mockplugin-0" objectUID=16f936ee-c1b1-4e8c-b940-4196118432f1 kind="Pod" propagationPolicy=Background
I0527 00:39:16.547625       1 event.go:291] "Event occurred" object="provisioning-7625/awsm8fkr" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0527 00:39:16.648005       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=3 deleting=3
I0527 00:39:16.648247       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-6f6f95ddc4" relatedReplicaSets=[webserver-6d6886857d webserver-6f6f95ddc4 webserver-7c5f9f596d webserver-84767c454 webserver-66d6495f4b]
I0527 00:39:16.648506       1 controller_utils.go:604] "Deleting pod" controller="webserver-6f6f95ddc4" pod="deployment-6359/webserver-6f6f95ddc4-82jr2"
I0527 00:39:16.648783       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6f6f95ddc4 to 3"
I0527 00:39:16.648830       1 controller_utils.go:604] "Deleting pod" controller="webserver-6f6f95ddc4" pod="deployment-6359/webserver-6f6f95ddc4-wczt6"
I0527 00:39:16.649016       1 controller_utils.go:604] "Deleting pod" controller="webserver-6f6f95ddc4" pod="deployment-6359/webserver-6f6f95ddc4-wmnzj"
I0527 00:39:16.656905       1 garbagecollector.go:471] "Processing object" object="provisioning-3441-9268/csi-hostpathplugin-6xf84" objectUID=27e09a00-5b39-449d-b6e6-3a1be37e9063 kind="EndpointSlice" virtual=false
I0527 00:39:16.657935       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:16.667704       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-7c5f9f596d" need=3 creating=3
I0527 00:39:16.668978       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6f6f95ddc4-82jr2"
I0527 00:39:16.669003       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6f6f95ddc4-wczt6"
I0527 00:39:16.669092       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7c5f9f596d to 3"
I0527 00:39:16.673117       1 garbagecollector.go:580] "Deleting object" object="provisioning-3441-9268/csi-hostpathplugin-6xf84" objectUID=27e09a00-5b39-449d-b6e6-3a1be37e9063
kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:39:16.680293       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6f6f95ddc4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6f6f95ddc4-wmnzj\"\nI0527 00:39:16.688135       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-qlvmj\"\nI0527 00:39:16.723671       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-l7wd7\"\nI0527 00:39:16.731106       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-wrm56\"\nE0527 00:39:16.728994       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:16.733553       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:16.766596       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:16.775395       1 deployment_controller.go:490] \"Error syncing deployment\" 
deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:16.884826       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3441-9268/csi-hostpathplugin\nI0527 00:39:16.885021       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpathplugin-0\" objectUID=67d76d73-d9f8-4006-abd2-626e0d3c641d kind=\"Pod\" virtual=false\nI0527 00:39:16.885090       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpathplugin-599c76cfc4\" objectUID=8f1b46c0-244e-4065-9856-4c298d7364f8 kind=\"ControllerRevision\" virtual=false\nI0527 00:39:16.887297       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpathplugin-599c76cfc4\" objectUID=8f1b46c0-244e-4065-9856-4c298d7364f8 kind=\"ControllerRevision\" propagationPolicy=Background\nI0527 00:39:16.887593       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpathplugin-0\" objectUID=67d76d73-d9f8-4006-abd2-626e0d3c641d kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:16.917731       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-4767-4374/csi-mockplugin-attacher\nI0527 00:39:16.917963       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4767-4374/csi-mockplugin-attacher-0\" objectUID=7ccee71e-12e5-4ded-9744-a66e20084f40 kind=\"Pod\" virtual=false\nI0527 00:39:16.918264       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4767-4374/csi-mockplugin-attacher-66f7bf56c5\" objectUID=82d92684-a377-4fe7-9941-7dffd9dea27d kind=\"ControllerRevision\" virtual=false\nI0527 00:39:16.920985       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4767-4374/csi-mockplugin-attacher-66f7bf56c5\" 
objectUID=82d92684-a377-4fe7-9941-7dffd9dea27d kind=\"ControllerRevision\" propagationPolicy=Background\nI0527 00:39:16.921390       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4767-4374/csi-mockplugin-attacher-0\" objectUID=7ccee71e-12e5-4ded-9744-a66e20084f40 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:17.072529       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-provisioner-8cqst\" objectUID=de241a4b-74d4-4f33-abe4-99e2a4cdabb9 kind=\"EndpointSlice\" virtual=false\nI0527 00:39:17.078007       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-provisioner-8cqst\" objectUID=de241a4b-74d4-4f33-abe4-99e2a4cdabb9 kind=\"EndpointSlice\" propagationPolicy=Background\nE0527 00:39:17.147809       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-4235/default: secrets \"default-token-pnjnw\" is forbidden: unable to create new content in namespace resourcequota-4235 because it is being terminated\nI0527 00:39:17.175303       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-4235/test-quota\nI0527 00:39:17.223755       1 utils.go:413] couldn't find ipfamilies for headless service: crd-webhook-3399/e2e-test-crd-conversion-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.42.102).\nI0527 00:39:17.278226       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-provisioner-d5f9b6d47\" objectUID=5dc07d97-b95a-4cfa-b47e-e3625a044e48 kind=\"ControllerRevision\" virtual=false\nI0527 00:39:17.278416       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3441-9268/csi-hostpath-provisioner\nI0527 00:39:17.278472       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-provisioner-0\" objectUID=01699138-7b31-469b-ad87-7119af140fd3 kind=\"Pod\" virtual=false\nI0527 00:39:17.282320       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-provisioner-d5f9b6d47\" objectUID=5dc07d97-b95a-4cfa-b47e-e3625a044e48 kind=\"ControllerRevision\" propagationPolicy=Background\nI0527 00:39:17.282320       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-provisioner-0\" objectUID=01699138-7b31-469b-ad87-7119af140fd3 kind=\"Pod\" propagationPolicy=Background\nE0527 00:39:17.419679       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-2158/pvc-2cw77: storageclass.storage.k8s.io \"volume-2158\" not found\nI0527 00:39:17.419765       1 event.go:291] \"Event occurred\" object=\"volume-2158/pvc-2cw77\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-2158\\\" not found\"\nI0527 00:39:17.464789       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-resizer-tdq8h\" objectUID=60071d65-05ed-4995-b789-f9bfd9a40a55 kind=\"EndpointSlice\" virtual=false\nI0527 00:39:17.616109       1 pv_controller.go:864] volume \"local-ljlmc\" entered phase \"Available\"\nI0527 00:39:17.683651       1 stateful_set.go:419] StatefulSet has been 
deleted provisioning-3441-9268/csi-hostpath-resizer\nI0527 00:39:17.683656       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-resizer-69db489d7\" objectUID=583dc3d7-730c-404d-9a20-45d922844115 kind=\"ControllerRevision\" virtual=false\nI0527 00:39:17.683935       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-resizer-0\" objectUID=0989b32c-4283-4072-a57e-06fd049196b4 kind=\"Pod\" virtual=false\nI0527 00:39:17.715591       1 namespace_controller.go:185] Namespace has been deleted volume-8408\nI0527 00:39:17.732313       1 event.go:291] \"Event occurred\" object=\"job-5174/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local-4rpb4\"\nI0527 00:39:17.736757       1 event.go:291] \"Event occurred\" object=\"job-5174/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local-sqlfd\"\nI0527 00:39:17.871822       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-snapshotter-4rzw8\" objectUID=50b44038-c2c2-443a-b8b7-0ab7412025f1 kind=\"EndpointSlice\" virtual=false\nI0527 00:39:17.889131       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:17.889152       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:17.889163       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:17.889139       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled 
because it is already set\nI0527 00:39:17.889285       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:17.920114       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-resizer-69db489d7\" objectUID=583dc3d7-730c-404d-9a20-45d922844115 kind=\"ControllerRevision\" propagationPolicy=Background\nI0527 00:39:17.920415       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-snapshotter-4rzw8\" objectUID=50b44038-c2c2-443a-b8b7-0ab7412025f1 kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:39:17.920614       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-resizer-tdq8h\" objectUID=60071d65-05ed-4995-b789-f9bfd9a40a55 kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:39:17.921033       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-resizer-0\" objectUID=0989b32c-4283-4072-a57e-06fd049196b4 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:18.000378       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [], removed: [kubectl.example.com/v1, Resource=e2e-test-kubectl-9163-crds]\nI0527 00:39:18.000516       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0527 00:39:18.000541       1 shared_informer.go:247] Caches are synced for resource quota \nI0527 00:39:18.000572       1 resource_quota_controller.go:454] synced quota controller\nI0527 00:39:18.065089       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3441-9268/csi-hostpath-snapshotter\nI0527 00:39:18.065115       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-snapshotter-0\" objectUID=e21417cd-6169-4f58-aa66-4f13b1f7bfd6 kind=\"Pod\" 
virtual=false\nI0527 00:39:18.065143       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-3441-9268/csi-hostpath-snapshotter-7545c5bc87\" objectUID=69769dc7-b5df-48d0-bef6-37824ffd0878 kind=\"ControllerRevision\" virtual=false\nI0527 00:39:18.067003       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-snapshotter-0\" objectUID=e21417cd-6169-4f58-aa66-4f13b1f7bfd6 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:18.067299       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3441-9268/csi-hostpath-snapshotter-7545c5bc87\" objectUID=69769dc7-b5df-48d0-bef6-37824ffd0878 kind=\"ControllerRevision\" propagationPolicy=Background\nE0527 00:39:18.385907       1 tokens_controller.go:262] error synchronizing serviceaccount volume-2282/default: secrets \"default-token-f4hh7\" is forbidden: unable to create new content in namespace volume-2282 because it is being terminated\nI0527 00:39:18.389390       1 expand_controller.go:277] Ignoring the PVC \"csi-mock-volumes-580/pvc-5rs74\" (uid: \"21fe0e4b-75ad-479b-af52-92e2a191069e\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0527 00:39:18.389774       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-580/pvc-5rs74\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nE0527 00:39:18.524921       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:18.674254       1 namespace_controller.go:185] Namespace has been deleted provisioning-5523\nI0527 00:39:18.741109       1 replica_set.go:595] \"Too many 
replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=2 deleting=1\nI0527 00:39:18.741619       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-84767c454\" relatedReplicaSets=[webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4 webserver-7c5f9f596d webserver-84767c454]\nI0527 00:39:18.742204       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-84767c454\" pod=\"deployment-6359/webserver-84767c454-8hvj2\"\nI0527 00:39:18.741900       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-84767c454 to 2\"\nE0527 00:39:18.759165       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1496/default: secrets \"default-token-5twcb\" is forbidden: unable to create new content in namespace provisioning-1496 because it is being terminated\nI0527 00:39:18.767208       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-84767c454-8hvj2\"\nI0527 00:39:18.770716       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" need=4 creating=1\nI0527 00:39:18.773408       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7c5f9f596d to 4\"\nI0527 00:39:18.782085       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-56j5j\"\nE0527 00:39:18.865315       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: 
Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:19.200757       1 namespace_controller.go:185] Namespace has been deleted volume-9225\nI0527 00:39:19.301645       1 event.go:291] \"Event occurred\" object=\"provisioning-40/nfscg4xx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-40\\\" or manually created by system administrator\"\nI0527 00:39:19.735208       1 namespace_controller.go:185] Namespace has been deleted volume-1196\nI0527 00:39:20.094995       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-3399/e2e-test-crd-conversion-webhook-ll22g\" objectUID=e11c02b4-dd50-4690-97bc-c4b9fe0b8034 kind=\"EndpointSlice\" virtual=false\nI0527 00:39:20.098408       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-3399/e2e-test-crd-conversion-webhook-ll22g\" objectUID=e11c02b4-dd50-4690-97bc-c4b9fe0b8034 kind=\"EndpointSlice\" propagationPolicy=Background\nE0527 00:39:20.177349       1 tokens_controller.go:262] error synchronizing serviceaccount crd-watch-7904/default: secrets \"default-token-nf8fr\" is forbidden: unable to create new content in namespace crd-watch-7904 because it is being terminated\nI0527 00:39:20.302109       1 deployment_controller.go:581] Deployment crd-webhook-3399/sample-crd-conversion-webhook-deployment has been deleted\nI0527 00:39:20.302394       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-3399/sample-crd-conversion-webhook-deployment-7d6697c5b7\" objectUID=7efa1f3a-7495-4533-baf8-8fc066e9f7ae kind=\"ReplicaSet\" virtual=false\nI0527 00:39:20.304657       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-3399/sample-crd-conversion-webhook-deployment-7d6697c5b7\" objectUID=7efa1f3a-7495-4533-baf8-8fc066e9f7ae 
kind=\"ReplicaSet\" propagationPolicy=Background\nI0527 00:39:20.314849       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-3399/sample-crd-conversion-webhook-deployment-7d6697c5b7-d76mp\" objectUID=824f50cf-37ff-48a2-b4db-b9cee28b812e kind=\"Pod\" virtual=false\nI0527 00:39:20.317302       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-3399/sample-crd-conversion-webhook-deployment-7d6697c5b7-d76mp\" objectUID=824f50cf-37ff-48a2-b4db-b9cee28b812e kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:20.869222       1 event.go:291] \"Event occurred\" object=\"job-5174/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local-hgm8g\"\nI0527 00:39:21.769258       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=1 deleting=1\nI0527 00:39:21.769476       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-84767c454\" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4 webserver-7c5f9f596d]\nI0527 00:39:21.769718       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-84767c454\" pod=\"deployment-6359/webserver-84767c454-hfbgm\"\nI0527 00:39:21.769795       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-84767c454 to 1\"\nI0527 00:39:21.779073       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" need=5 creating=1\nI0527 00:39:21.779767       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7c5f9f596d to 5\"\nI0527 00:39:21.783927       1 
deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:21.785225       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-84767c454-hfbgm\"\nI0527 00:39:21.785802       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-4l54g\"\nI0527 00:39:21.869405       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-84767c454 to 0\"\nI0527 00:39:21.869724       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-84767c454\" need=0 deleting=1\nI0527 00:39:21.869964       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-84767c454\" relatedReplicaSets=[webserver-7c5f9f596d webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4]\nI0527 00:39:21.870202       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-84767c454\" pod=\"deployment-6359/webserver-84767c454-hjxnd\"\nI0527 00:39:21.875960       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" need=6 creating=1\nI0527 00:39:21.876239       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7c5f9f596d to 6\"\nI0527 00:39:21.886570       1 
event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-vrggl\"\nI0527 00:39:21.895472       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-84767c454\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-84767c454-hjxnd\"\nE0527 00:39:22.146985       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4767-4374/default: secrets \"default-token-7g7ss\" is forbidden: unable to create new content in namespace csi-mock-volumes-4767-4374 because it is being terminated\nI0527 00:39:22.183451       1 namespace_controller.go:185] Namespace has been deleted resourcequota-4235\nI0527 00:39:22.243612       1 utils.go:424] couldn't find ipfamilies for headless service: dns-2413/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:39:22.311833       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-1a/vol-02e8e7d595de13dc7\nI0527 00:39:22.364600       1 pv_controller.go:1652] volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" provisioned for claim \"provisioning-7625/awsm8fkr\"\nI0527 00:39:22.365036       1 event.go:291] \"Event occurred\" object=\"provisioning-7625/awsm8fkr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af using kubernetes.io/aws-ebs\"\nI0527 00:39:22.369151       1 pv_controller.go:864] volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" entered phase \"Bound\"\nI0527 00:39:22.369230       1 pv_controller.go:967] volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" bound to claim \"provisioning-7625/awsm8fkr\"\nI0527 00:39:22.373738       1 pv_controller.go:808] claim \"provisioning-7625/awsm8fkr\" entered phase \"Bound\"\nI0527 00:39:22.440710       1 utils.go:424] couldn't find ipfamilies for headless service: dns-2413/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nE0527 00:39:22.511065       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:22.729770       1 pv_controller.go:864] volume \"pvc-257389da-3b9c-49ac-8cf1-3b9150322c6a\" entered phase \"Bound\"\nI0527 00:39:22.729800       1 pv_controller.go:967] volume \"pvc-257389da-3b9c-49ac-8cf1-3b9150322c6a\" bound to claim \"provisioning-40/nfscg4xx\"\nI0527 00:39:22.734969       1 pv_controller.go:808] claim \"provisioning-40/nfscg4xx\" entered phase \"Bound\"\nI0527 00:39:23.033910       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-02e8e7d595de13dc7\") from node \"ip-172-20-33-93.ap-southeast-1.compute.internal\" \nI0527 00:39:23.084822       1 aws.go:2014] Assigned mount device cn -> volume vol-02e8e7d595de13dc7\nI0527 00:39:23.248221       1 utils.go:424] couldn't find ipfamilies for headless service: dns-2413/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:39:23.260791       1 event.go:291] "Event occurred" object="job-5174/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local-r27rc"
I0527 00:39:23.377686       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [kubectl.example.com/v1, Resource=e2e-test-kubectl-9163-crds mygroup.example.com/v1beta1, Resource=noxus]
I0527 00:39:23.377754       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0527 00:39:23.377807       1 shared_informer.go:247] Caches are synced for garbage collector 
I0527 00:39:23.377850       1 garbagecollector.go:254] synced garbage collector
I0527 00:39:23.480953       1 aws.go:2427] AttachVolume volume="vol-02e8e7d595de13dc7" instance="i-081c5901a8830e60d" request returned {
  AttachTime: 2021-05-27 00:39:23.464 +0000 UTC,
  Device: "/dev/xvdcn",
  InstanceId: "i-081c5901a8830e60d",
  State: "attaching",
  VolumeId: "vol-02e8e7d595de13dc7"
}
I0527 00:39:23.904238       1 namespace_controller.go:185] Namespace has been deleted volume-2282
I0527 00:39:24.000153       1 deployment_controller.go:581] Deployment webhook-1501/sample-webhook-deployment has been deleted
I0527 00:39:24.016421       1 namespace_controller.go:185] Namespace has been deleted provisioning-1496
I0527 00:39:24.635438       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="DeploymentRollback" message="Rolled back deployment \"webserver\" to revision 6"
I0527 00:39:24.641184       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-6f6f95ddc4\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:24.646947       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-7c5f9f596d" need=3 deleting=3
I0527 00:39:24.647111       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-7c5f9f596d" relatedReplicaSets=[webserver-7c5f9f596d webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4]
I0527 00:39:24.647321       1 controller_utils.go:604] "Deleting pod" controller="webserver-7c5f9f596d" pod="deployment-6359/webserver-7c5f9f596d-qlvmj"
I0527 00:39:24.647443       1 controller_utils.go:604] "Deleting pod" controller="webserver-7c5f9f596d" pod="deployment-6359/webserver-7c5f9f596d-vrggl"
I0527 00:39:24.647328       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7c5f9f596d to 3"
I0527 00:39:24.647389       1 controller_utils.go:604] "Deleting pod" controller="webserver-7c5f9f596d" pod="deployment-6359/webserver-7c5f9f596d-4l54g"
I0527 00:39:24.652098       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:24.659429       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=6 creating=3
I0527 00:39:24.660184       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 6"
I0527 00:39:24.665088       1 event.go:291] "Event occurred" object="deployment-6359/webserver-7c5f9f596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7c5f9f596d-qlvmj"
I0527 00:39:24.671674       1 event.go:291] "Event occurred" object="deployment-6359/webserver-7c5f9f596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7c5f9f596d-4l54g"
I0527 00:39:24.673049       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-bbzdp"
I0527 00:39:24.674310       1 event.go:291] "Event occurred" object="deployment-6359/webserver-7c5f9f596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7c5f9f596d-vrggl"
I0527 00:39:24.698834       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-gq92t"
I0527 00:39:24.716907       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-rglbx"
I0527 00:39:24.738942       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:25.117096       1 event.go:291] "Event occurred" object="volume-expand-6289/awswcw9q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0527 00:39:25.230846       1 replica_set.go:559] "Too few replicas" replicaSet="replication-controller-8001/rc-test" need=1 creating=1
I0527 00:39:25.238712       1 event.go:291] "Event occurred" object="replication-controller-8001/rc-test" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rc-test-tk62t"
I0527 00:39:25.243090       1 namespace_controller.go:185] Namespace has been deleted crd-watch-7904
I0527 00:39:25.596223       1 aws.go:2037] Releasing in-process attachment entry: cn -> volume vol-02e8e7d595de13dc7
I0527 00:39:25.596278       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-02e8e7d595de13dc7") from node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:39:25.596418       1 event.go:291] "Event occurred" object="provisioning-7625/pod-subpath-test-dynamicpv-rbzx" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" "
E0527 00:39:26.155479       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-7752/default: secrets "default-token-7hslw" is forbidden: unable to create new content in namespace downward-api-7752 because it is being terminated
I0527 00:39:26.374365       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7c5f9f596d to 2"
I0527 00:39:26.374733       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-7c5f9f596d" need=2 deleting=1
I0527 00:39:26.375000       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-7c5f9f596d" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4 webserver-7c5f9f596d]
I0527 00:39:26.375225       1 controller_utils.go:604] "Deleting pod" controller="webserver-7c5f9f596d" pod="deployment-6359/webserver-7c5f9f596d-l7wd7"
I0527 00:39:26.383508       1 event.go:291] "Event occurred" object="deployment-6359/webserver-7c5f9f596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7c5f9f596d-l7wd7"
I0527 00:39:26.384921       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=7 creating=1
I0527 00:39:26.388669       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6f6f95ddc4 to 7"
I0527 00:39:26.395392       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:26.395881       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6f6f95ddc4-prs2h"
I0527 00:39:26.409733       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:26.592579       1 namespace_controller.go:185] Namespace has been deleted kubectl-1047
I0527 00:39:27.272574       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4767-4374
I0527 00:39:27.888342       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:27.888356       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:27.888376       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:27.888387       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:27.888402       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
E0527 00:39:27.955474       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:27.999989       1 pvc_protection_controller.go:291] PVC provisioning-1142/pvc-kjz54 is unused
I0527 00:39:28.005298       1 pv_controller.go:638] volume "local-pdhlt" is released and reclaim policy "Retain" will be executed
I0527 00:39:28.006938       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-580/pvc-5rs74 is unused
I0527 00:39:28.009141       1 pv_controller.go:864] volume "local-pdhlt" entered phase "Released"
I0527 00:39:28.016046       1 pv_controller.go:638] volume "pvc-21fe0e4b-75ad-479b-af52-92e2a191069e" is released and reclaim policy "Delete" will be executed
I0527 00:39:28.018759       1 pv_controller.go:864] volume "pvc-21fe0e4b-75ad-479b-af52-92e2a191069e" entered phase "Released"
I0527 00:39:28.021079       1 pv_controller.go:1326] isVolumeReleased[pvc-21fe0e4b-75ad-479b-af52-92e2a191069e]: volume is released
I0527 00:39:28.030624       1 pv_controller_base.go:504] deletion of claim "csi-mock-volumes-580/pvc-5rs74" was already processed
I0527 00:39:28.191163       1 pv_controller_base.go:504] deletion of claim "provisioning-1142/pvc-kjz54" was already processed
E0527 00:39:28.385358       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4650/default: secrets "default-token-jxxrc" is forbidden: unable to create new content in namespace provisioning-4650 because it is being terminated
I0527 00:39:28.458881       1 event.go:291] "Event occurred" object="job-5174/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0527 00:39:28.870271       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=6 deleting=1
I0527 00:39:28.870491       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-6f6f95ddc4" relatedReplicaSets=[webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4 webserver-7c5f9f596d webserver-84767c454]
I0527 00:39:28.870651       1 controller_utils.go:604] "Deleting pod" controller="webserver-6f6f95ddc4" pod="deployment-6359/webserver-6f6f95ddc4-prs2h"
I0527 00:39:28.871267       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6f6f95ddc4 to 6"
I0527 00:39:28.881715       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6f6f95ddc4-prs2h"
I0527 00:39:28.884582       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-7c5f9f596d" need=1 deleting=1
I0527 00:39:28.884729       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-7c5f9f596d" relatedReplicaSets=[webserver-6f6f95ddc4 webserver-7c5f9f596d webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d]
I0527 00:39:28.884964       1 controller_utils.go:604] "Deleting pod" controller="webserver-7c5f9f596d" pod="deployment-6359/webserver-7c5f9f596d-wrm56"
I0527 00:39:28.887268       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7c5f9f596d to 1"
I0527 00:39:28.896323       1 event.go:291] "Event occurred" object="deployment-6359/webserver-7c5f9f596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7c5f9f596d-wrm56"
I0527 00:39:28.906668       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0527 00:39:28.919034       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
E0527 00:39:29.337400       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:29.555841       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-3510
I0527 00:39:30.014016       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-3399
I0527 00:39:30.255914       1 namespace_controller.go:185] Namespace has been deleted volume-7951
I0527 00:39:30.523330       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7c5f9f596d to 0"
I0527 00:39:30.523751       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-7c5f9f596d" need=0 deleting=1
I0527 00:39:30.523997       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-7c5f9f596d" relatedReplicaSets=[webserver-84767c454 webserver-66d6495f4b webserver-6d6886857d webserver-6f6f95ddc4 webserver-7c5f9f596d]
I0527 00:39:30.524191       1 controller_utils.go:604] "Deleting pod" controller="webserver-7c5f9f596d" pod="deployment-6359/webserver-7c5f9f596d-56j5j"
I0527 00:39:30.538321       1 event.go:291] "Event occurred" object="deployment-6359/webserver-7c5f9f596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7c5f9f596d-56j5j"
I0527 00:39:30.992357       1 pv_controller.go:915] claim "volume-2158/pvc-2cw77" bound to volume "local-ljlmc"
I0527 00:39:30.992531       1 event.go:291] "Event occurred" object="volume-expand-6289/awswcw9q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0527 00:39:30.999865       1 pv_controller.go:864] volume "local-ljlmc" entered phase "Bound"
I0527 00:39:31.000032       1 pv_controller.go:967] volume "local-ljlmc" bound to claim "volume-2158/pvc-2cw77"
I0527 00:39:31.005424       1 pv_controller.go:808] claim "volume-2158/pvc-2cw77" entered phase "Bound"
I0527 00:39:31.221291       1 namespace_controller.go:185] Namespace has been deleted downward-api-7752
E0527 00:39:32.434967       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:39:32.678404       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:33.171649       1 replica_set.go:559] "Too few replicas" replicaSet="replication-controller-8001/rc-test" need=2 creating=1
I0527 00:39:33.174907       1 event.go:291] "Event occurred" object="replication-controller-8001/rc-test" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rc-test-8c7dr"
I0527 00:39:33.449713       1 namespace_controller.go:185] Namespace has been deleted provisioning-4650
E0527 00:39:33.594713       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-580/default: secrets "default-token-wgksk" is forbidden: unable to create new content in namespace csi-mock-volumes-580 because it is being terminated
I0527 00:39:33.755043       1 namespace_controller.go:185] Namespace has been deleted projected-7804
I0527 00:39:34.046125       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9411/pod-a50e4ddb-2c4a-497d-8965-e97309355a2b uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-7js6q pvc- persistent-local-volumes-test-9411  99beff27-46d4-44dd-9f03-3b3e93c1a790 29435 0 2021-05-27 00:39:06 +0000 UTC 2021-05-27 00:39:34 +0000 UTC 0xc002202c48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm79f7,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9411,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:39:34.046228       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9411/pvc-7js6q because it is still being used
I0527 00:39:34.364647       1 utils.go:424] couldn't find ipfamilies for headless service: dns-2413/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0527 00:39:34.665172       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:34.791844       1 pvc_protection_controller.go:291] PVC provisioning-40/nfscg4xx is unused
I0527 00:39:34.797102       1 pv_controller.go:638] volume "pvc-257389da-3b9c-49ac-8cf1-3b9150322c6a" is released and reclaim policy "Delete" will be executed
I0527 00:39:34.799729       1 pv_controller.go:864] volume "pvc-257389da-3b9c-49ac-8cf1-3b9150322c6a" entered phase "Released"
I0527 00:39:34.801302       1 pv_controller.go:1326] isVolumeReleased[pvc-257389da-3b9c-49ac-8cf1-3b9150322c6a]: volume is released
I0527 00:39:34.809257       1 pv_controller_base.go:504] deletion of claim "provisioning-40/nfscg4xx" was already processed
W0527 00:39:34.862826       1 reconciler.go:335] Multi-Attach error for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") from node "ip-172-20-40-209.ap-southeast-1.compute.internal" Volume is already exclusively attached to node ip-172-20-33-93.ap-southeast-1.compute.internal and can't be attached to another
I0527 00:39:34.863002       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-7842/pod-b3508d7e-4d62-4deb-9659-2a42ff05b851" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" Volume is already exclusively attached to one node and can't be attached to another"
I0527 00:39:35.315426       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9411/pod-1f8f0e96-a96c-4ef0-95bd-396853d6df37 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-7js6q pvc- persistent-local-volumes-test-9411  99beff27-46d4-44dd-9f03-3b3e93c1a790 29435 0 2021-05-27 00:39:06 +0000 UTC 2021-05-27 00:39:34 +0000 UTC 0xc002202c48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm79f7,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9411,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:39:35.316321       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9411/pvc-7js6q because it is still being used
I0527 00:39:35.350469       1 namespace_controller.go:185] Namespace has been deleted volume-6417
I0527 00:39:35.424213       1 garbagecollector.go:471] "Processing object" object="replication-controller-8001/rc-test" objectUID=dd33184a-b993-4f3b-9129-7ba725719fe4 kind="ReplicationController" virtual=false
I0527 00:39:35.430712       1 garbagecollector.go:471] "Processing object" object="replication-controller-8001/rc-test" objectUID=dd33184a-b993-4f3b-9129-7ba725719fe4 kind="ReplicationController" virtual=false
I0527 00:39:35.455837       1 garbagecollector.go:471] "Processing object" object="job-5174/fail-once-local-r27rc" objectUID=451bfe02-f423-4ae9-bb85-49ecdf26b23d kind="Pod" virtual=false
I0527 00:39:35.456019       1 garbagecollector.go:471] "Processing object" object="job-5174/fail-once-local-sqlfd" objectUID=3bb1852f-7a41-4e23-821b-ee7f4e311afb kind="Pod" virtual=false
I0527 00:39:35.455939       1 garbagecollector.go:471] "Processing object" object="job-5174/fail-once-local-4rpb4" objectUID=bba97680-949f-4a29-8e3f-49e593e92a2f kind="Pod" virtual=false
I0527 00:39:35.456419       1 garbagecollector.go:471] "Processing object" object="job-5174/fail-once-local-hgm8g" objectUID=2dfa2938-1c53-4157-a783-28bc1f04087a kind="Pod" virtual=false
I0527 00:39:35.458023       1 garbagecollector.go:580] "Deleting object" object="job-5174/fail-once-local-4rpb4" objectUID=bba97680-949f-4a29-8e3f-49e593e92a2f kind="Pod" propagationPolicy=Background
I0527 00:39:35.459438       1 garbagecollector.go:580] "Deleting object" object="job-5174/fail-once-local-r27rc" objectUID=451bfe02-f423-4ae9-bb85-49ecdf26b23d kind="Pod" propagationPolicy=Background
I0527 00:39:35.460669       1 garbagecollector.go:580] "Deleting object" object="job-5174/fail-once-local-hgm8g" objectUID=2dfa2938-1c53-4157-a783-28bc1f04087a kind="Pod" propagationPolicy=Background
I0527 00:39:35.462050       1 garbagecollector.go:580] "Deleting object" object="job-5174/fail-once-local-sqlfd" objectUID=3bb1852f-7a41-4e23-821b-ee7f4e311afb kind="Pod" propagationPolicy=Background
I0527 00:39:35.481330       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:39:35.485396       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:39:35.715303       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9411/pod-1f8f0e96-a96c-4ef0-95bd-396853d6df37 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-7js6q pvc- persistent-local-volumes-test-9411  99beff27-46d4-44dd-9f03-3b3e93c1a790 29435 0 2021-05-27 00:39:06 +0000 UTC 2021-05-27 00:39:34 +0000 UTC 0xc002202c48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm79f7,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9411,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:39:35.715379       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9411/pvc-7js6q because it is still being used
I0527 00:39:36.405827       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.218.91).
I0527 00:39:36.466034       1 namespace_controller.go:185] Namespace has been deleted projected-3371
I0527 00:39:36.603356       1 event.go:291] "Event occurred" object="volumemode-7628-9717/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0527 00:39:36.603688       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.218.91).
I0527 00:39:36.979841       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.86.30).
I0527 00:39:37.115868       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9411/pod-1f8f0e96-a96c-4ef0-95bd-396853d6df37 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-7js6q pvc- persistent-local-volumes-test-9411  99beff27-46d4-44dd-9f03-3b3e93c1a790 29435 0 2021-05-27 00:39:06 +0000 UTC 2021-05-27 00:39:34 +0000 UTC 0xc002202c48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm79f7,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9411,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:39:37.116224       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9411/pvc-7js6q because it is still being used
I0527 00:39:37.118766       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9411/pod-1f8f0e96-a96c-4ef0-95bd-396853d6df37 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-7js6q pvc- persistent-local-volumes-test-9411  99beff27-46d4-44dd-9f03-3b3e93c1a790 29435 0 2021-05-27 00:39:06 +0000 UTC 2021-05-27 00:39:34 +0000 UTC 0xc002202c48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm79f7,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9411,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:39:37.118843       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9411/pvc-7js6q because it is still being used
I0527 00:39:37.177340       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.86.30).
I0527 00:39:37.179272       1 event.go:291] "Event occurred" object="volumemode-7628-9717/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
E0527 00:39:37.271970       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:37.390899       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.173.243).
I0527 00:39:37.591637       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.173.243).
I0527 00:39:37.591994       1 event.go:291] "Event occurred" object="volumemode-7628-9717/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0527 00:39:37.783533       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.196.152).
I0527 00:39:37.907999       1 pv_controller.go:864] volume "local-pvzcx6x" entered phase "Available"
I0527 00:39:37.915878       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9411/pod-1f8f0e96-a96c-4ef0-95bd-396853d6df37 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-7js6q pvc- persistent-local-volumes-test-9411  99beff27-46d4-44dd-9f03-3b3e93c1a790 29435 0 2021-05-27 00:39:06 +0000 UTC 2021-05-27 00:39:34 +0000 UTC 0xc002202c48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm79f7,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9411,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:39:37.915949       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9411/pvc-7js6q because it is still being used
I0527 00:39:37.920003     
  1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-9411/pvc-7js6q is unused\nI0527 00:39:37.926042       1 pv_controller.go:638] volume \"local-pvm79f7\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:39:37.928750       1 pv_controller.go:864] volume \"local-pvm79f7\" entered phase \"Released\"\nI0527 00:39:37.933006       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-9411/pvc-7js6q\" was already processed\nI0527 00:39:37.958569       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:37.958569       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:37.958583       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:37.958609       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:37.958742       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:37.982025       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.196.152).\nI0527 00:39:37.982979       1 event.go:291] \"Event occurred\" object=\"volumemode-7628-9717/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0527 00:39:37.985677       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.86.30).\nE0527 00:39:38.029932       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:38.104502       1 pv_controller.go:915] claim \"persistent-local-volumes-test-8264/pvc-cwtkx\" bound to volume \"local-pvzcx6x\"\nI0527 00:39:38.111617       1 pv_controller.go:864] volume \"local-pvzcx6x\" entered phase \"Bound\"\nI0527 00:39:38.111643       1 pv_controller.go:967] volume \"local-pvzcx6x\" bound to claim \"persistent-local-volumes-test-8264/pvc-cwtkx\"\nI0527 00:39:38.118534       1 pv_controller.go:808] claim \"persistent-local-volumes-test-8264/pvc-cwtkx\" entered phase \"Bound\"\nI0527 00:39:38.172271       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.138.142).\nI0527 00:39:38.241183       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 7\"\nI0527 00:39:38.247017       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-7c5f9f596d\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:38.254643       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" need=2 creating=2\nI0527 00:39:38.255150       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7c5f9f596d to 2\"\nI0527 00:39:38.260048       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:38.261132       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-w2zlj\"\nI0527 00:39:38.264296       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" need=5 deleting=1\nI0527 00:39:38.264326       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" relatedReplicaSets=[webserver-84767c454 webserver-7c5f9f596d webserver-6f6f95ddc4]\nI0527 00:39:38.264429 
      1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6f6f95ddc4\" pod=\"deployment-6359/webserver-6f6f95ddc4-gq92t\"\nI0527 00:39:38.271963       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-kkvrd\"\nI0527 00:39:38.271995       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6f6f95ddc4 to 5\"\nI0527 00:39:38.285108       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6f6f95ddc4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6f6f95ddc4-gq92t\"\nI0527 00:39:38.305154       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-7c5f9f596d\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:38.316515       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" need=3 creating=1\nI0527 00:39:38.317735       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7c5f9f596d to 3\"\nI0527 00:39:38.326345       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7c5f9f596d-fkhtm\"\nI0527 00:39:38.339915       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be 
fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:38.348875       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:38.369536       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.138.142).\nI0527 00:39:38.376846       1 event.go:291] \"Event occurred\" object=\"volumemode-7628-9717/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0527 00:39:38.408926       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.173.243).\nE0527 00:39:38.494108       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-6905/pvc-5s6mx: storageclass.storage.k8s.io \"provisioning-6905\" not found\nI0527 00:39:38.494562       1 event.go:291] \"Event occurred\" object=\"provisioning-6905/pvc-5s6mx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6905\\\" not found\"\nI0527 00:39:38.640131       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" need=0 deleting=3\nI0527 00:39:38.640562       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-7c5f9f596d\" relatedReplicaSets=[webserver-6f6f95ddc4 webserver-654cd69b7b webserver-84767c454 webserver-7c5f9f596d]\nI0527 00:39:38.641048       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-7c5f9f596d\" pod=\"deployment-6359/webserver-7c5f9f596d-kkvrd\"\nI0527 00:39:38.641299       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-7c5f9f596d\" pod=\"deployment-6359/webserver-7c5f9f596d-w2zlj\"\nI0527 00:39:38.642332       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7c5f9f596d to 0\"\nI0527 00:39:38.642478       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-7c5f9f596d\" pod=\"deployment-6359/webserver-7c5f9f596d-fkhtm\"\nI0527 00:39:38.646466       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 
00:39:38.659011       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-654cd69b7b to 3\"\nI0527 00:39:38.659289       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-654cd69b7b\" need=3 creating=3\nI0527 00:39:38.662050       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7c5f9f596d-fkhtm\"\nI0527 00:39:38.669427       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-654cd69b7b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-654cd69b7b-cz5zg\"\nI0527 00:39:38.676802       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7c5f9f596d-w2zlj\"\nI0527 00:39:38.677171       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-7c5f9f596d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7c5f9f596d-kkvrd\"\nI0527 00:39:38.677320       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-654cd69b7b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-654cd69b7b-qcvpg\"\nI0527 00:39:38.685057       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-654cd69b7b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-654cd69b7b-hqvrd\"\nI0527 00:39:38.700969       1 pv_controller.go:864] volume \"local-xqljg\" entered phase 
\"Available\"\nI0527 00:39:38.702697       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-580\nI0527 00:39:38.789176       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.196.152).\nI0527 00:39:38.845998       1 namespace_controller.go:185] Namespace has been deleted provisioning-3441-9268\nI0527 00:39:38.980463       1 event.go:291] \"Event occurred\" object=\"volumemode-7628/csi-hostpathc8dm6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-7628\\\" or manually created by system administrator\"\nI0527 00:39:39.175707       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.138.142).\nI0527 00:39:39.183649       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-8264/pvc-cwtkx is unused\nI0527 00:39:39.189199       1 pv_controller.go:638] volume \"local-pvzcx6x\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:39:39.191971       1 pv_controller.go:864] volume \"local-pvzcx6x\" entered phase \"Released\"\nI0527 00:39:39.387377       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-8264/pvc-cwtkx\" was already processed\nI0527 00:39:39.562418       1 event.go:291] \"Event occurred\" object=\"provisioning-8850/nfsbwrxs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-8850\\\" or manually created by system administrator\"\nI0527 00:39:39.715184       1 pvc_protection_controller.go:291] PVC volume-2158/pvc-2cw77 is unused\nI0527 00:39:39.731268       1 pv_controller.go:638] volume \"local-ljlmc\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:39:39.734687       1 pv_controller.go:864] volume \"local-ljlmc\" entered phase \"Released\"\nI0527 00:39:39.904658       1 pv_controller_base.go:504] deletion of claim \"volume-2158/pvc-2cw77\" was already processed\nE0527 00:39:40.005939       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:40.157691       1 namespace_controller.go:185] Namespace has been deleted provisioning-1142\nI0527 00:39:40.401238       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" need=4 deleting=1\nI0527 00:39:40.402638       1 replica_set.go:223] \"Found 
related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" relatedReplicaSets=[webserver-84767c454 webserver-7c5f9f596d webserver-6f6f95ddc4 webserver-654cd69b7b]\nI0527 00:39:40.402919       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6f6f95ddc4\" pod=\"deployment-6359/webserver-6f6f95ddc4-rglbx\"\nI0527 00:39:40.406662       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6f6f95ddc4 to 4\"\nI0527 00:39:40.420714       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6f6f95ddc4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6f6f95ddc4-rglbx\"\nI0527 00:39:40.427373       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-654cd69b7b\" need=4 creating=1\nI0527 00:39:40.430903       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-654cd69b7b to 4\"\nI0527 00:39:40.436918       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-654cd69b7b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-654cd69b7b-8tbdr\"\nI0527 00:39:40.468782       1 pv_controller.go:864] volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" entered phase \"Bound\"\nI0527 00:39:40.468827       1 pv_controller.go:967] volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" bound to claim \"volumemode-7628/csi-hostpathc8dm6\"\nI0527 00:39:40.474722       1 pv_controller.go:808] claim \"volumemode-7628/csi-hostpathc8dm6\" entered phase \"Bound\"\nI0527 00:39:40.494839       1 replica_set.go:595] \"Too many replicas\" 
replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" need=3 deleting=1\nI0527 00:39:40.495610       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6f6f95ddc4 to 3\"\nI0527 00:39:40.496446       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" relatedReplicaSets=[webserver-84767c454 webserver-7c5f9f596d webserver-6f6f95ddc4 webserver-654cd69b7b]\nI0527 00:39:40.496619       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6f6f95ddc4\" pod=\"deployment-6359/webserver-6f6f95ddc4-bbzdp\"\nI0527 00:39:40.504709       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-654cd69b7b\" need=5 creating=1\nI0527 00:39:40.507684       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-654cd69b7b to 5\"\nI0527 00:39:40.511722       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-654cd69b7b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-654cd69b7b-w895w\"\nI0527 00:39:40.520792       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6f6f95ddc4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6f6f95ddc4-bbzdp\"\nI0527 00:39:40.532841       1 namespace_controller.go:185] Namespace has been deleted job-5174\nI0527 00:39:40.706165       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-0\" objectUID=19be3bb1-b818-4044-91fc-6aee60858a86 kind=\"Pod\" virtual=false\nI0527 00:39:40.706735       1 stateful_set.go:419] StatefulSet has been deleted 
csi-mock-volumes-580-1307/csi-mockplugin\nI0527 00:39:40.706798       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-7d8d697c45\" objectUID=d95311b9-9abe-4a57-b566-28d24c706a17 kind=\"ControllerRevision\" virtual=false\nI0527 00:39:40.708642       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-0\" objectUID=19be3bb1-b818-4044-91fc-6aee60858a86 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:40.708890       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-7d8d697c45\" objectUID=d95311b9-9abe-4a57-b566-28d24c706a17 kind=\"ControllerRevision\" propagationPolicy=Background\nE0527 00:39:40.792584       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-9732/default: secrets \"default-token-mmcnn\" is forbidden: unable to create new content in namespace kubectl-9732 because it is being terminated\nI0527 00:39:40.971967       1 aws.go:2291] Waiting for volume \"vol-0226586ae109ac335\" state: actual=detaching, desired=detached\nI0527 00:39:41.087800       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-resizer-0\" objectUID=7d841800-2887-41a5-a247-bc7d62923996 kind=\"Pod\" virtual=false\nI0527 00:39:41.088206       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-580-1307/csi-mockplugin-resizer\nI0527 00:39:41.088327       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-resizer-7f4799bc79\" objectUID=4d50537f-0935-41d1-bad5-0df77991aeda kind=\"ControllerRevision\" virtual=false\nI0527 00:39:41.090741       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-580-1307/csi-mockplugin-resizer-0\" objectUID=7d841800-2887-41a5-a247-bc7d62923996 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:41.090947       1 garbagecollector.go:580] \"Deleting object\" 
object=\"csi-mock-volumes-580-1307/csi-mockplugin-resizer-7f4799bc79\" objectUID=4d50537f-0935-41d1-bad5-0df77991aeda kind=\"ControllerRevision\" propagationPolicy=Background\nE0527 00:39:41.236463       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-9411/default: secrets \"default-token-bskgg\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9411 because it is being terminated\nI0527 00:39:41.279825       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-6214/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0527 00:39:41.281099       1 event.go:291] \"Event occurred\" object=\"webhook-6214/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0527 00:39:41.290029       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-6214/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:41.293476       1 event.go:291] \"Event occurred\" object=\"webhook-6214/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-x6dr8\"\nI0527 00:39:41.363847       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.218.91).\nI0527 00:39:41.551607       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" need=2 deleting=1\nI0527 00:39:41.551890       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" relatedReplicaSets=[webserver-654cd69b7b webserver-84767c454 webserver-7c5f9f596d webserver-6f6f95ddc4]\nI0527 00:39:41.552111       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6f6f95ddc4\" pod=\"deployment-6359/webserver-6f6f95ddc4-xqp7d\"\nI0527 00:39:41.552629       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6f6f95ddc4 to 2\"\nI0527 00:39:41.563704       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6359/webserver-654cd69b7b\" need=6 creating=1\nI0527 00:39:41.564206       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-654cd69b7b to 6\"\nI0527 00:39:41.576979       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6f6f95ddc4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6f6f95ddc4-xqp7d\"\nI0527 00:39:41.580401       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-654cd69b7b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-654cd69b7b-rgjgx\"\nI0527 00:39:41.587671       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6359/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": 
the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:39:42.002901       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7628^05a1b4bd-be84-11eb-b463-a63ce1686753\") from node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:39:42.012261       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7628^05a1b4bd-be84-11eb-b463-a63ce1686753\") from node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:39:42.012500       1 event.go:291] \"Event occurred\" object=\"volumemode-7628/pod-0f01bc5c-0850-4891-9ba5-7e2b9a81079f\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\\\" \"\nE0527 00:39:42.297399       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:42.369998       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.218.91).\nI0527 00:39:42.778503       1 pvc_protection_controller.go:291] PVC provisioning-7625/awsm8fkr is unused\nI0527 00:39:42.784801       1 pv_controller.go:638] volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" is released and reclaim policy \"Delete\" will be executed\nI0527 00:39:42.788967       1 pv_controller.go:864] volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" entered phase \"Released\"\nI0527 00:39:42.790270       1 pv_controller.go:1326] isVolumeReleased[pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af]: volume is released\nI0527 00:39:42.858447       1 pv_controller.go:864] volume \"pvc-b929ce7a-e804-4a39-865a-f0c4a3c1991a\" entered phase \"Bound\"\nI0527 00:39:42.858502       1 pv_controller.go:967] volume \"pvc-b929ce7a-e804-4a39-865a-f0c4a3c1991a\" bound to claim \"provisioning-8850/nfsbwrxs\"\nI0527 00:39:42.864136       1 pv_controller.go:808] claim \"provisioning-8850/nfsbwrxs\" entered phase \"Bound\"\nI0527 00:39:42.922307       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" need=1 deleting=1\nI0527 00:39:42.922508       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6359/webserver-6f6f95ddc4\" relatedReplicaSets=[webserver-84767c454 webserver-7c5f9f596d webserver-6f6f95ddc4 webserver-654cd69b7b]\nI0527 00:39:42.922699       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6f6f95ddc4 to 1\"\nI0527 00:39:42.922801       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-6f6f95ddc4\" pod=\"deployment-6359/webserver-6f6f95ddc4-4qxft\"\nI0527 00:39:42.948516       1 event.go:291] \"Event occurred\" object=\"deployment-6359/webserver-6f6f95ddc4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason="SuccessfulDelete" message="Deleted pod: webserver-6f6f95ddc4-4qxft"
I0527 00:39:42.985640       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-02e8e7d595de13dc7: error deleting EBS volume "vol-02e8e7d595de13dc7" since volume is currently attached to "i-081c5901a8830e60d"
E0527 00:39:42.985808       1 goroutinemap.go:150] Operation for "delete-pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af[546a6fc5-7416-49bb-8395-5e826303815c]" failed. No retries permitted until 2021-05-27 00:39:43.485787059 +0000 UTC m=+1113.742049654 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-02e8e7d595de13dc7\" since volume is currently attached to \"i-081c5901a8830e60d\""
I0527 00:39:42.985837       1 event.go:291] "Event occurred" object="pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-02e8e7d595de13dc7\" since volume is currently attached to \"i-081c5901a8830e60d\""
I0527 00:39:43.047716       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-05-27 00:39:02 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdbr",
  InstanceId: "i-081c5901a8830e60d",
  State: "detaching",
  VolumeId: "vol-0226586ae109ac335"
}
I0527 00:39:43.048332       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:39:43.110217       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") from node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
W0527 00:39:43.165014       1 aws.go:2207] Waiting for volume "vol-07e5b4da20cff9ffe" to be detached but the volume does not exist
I0527 00:39:43.165045       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  State: "detached"
}
I0527 00:39:43.165099       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "aws-volume-0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-07e5b4da20cff9ffe") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:39:43.170234       1 aws.go:2014] Assigned mount device cx -> volume vol-0226586ae109ac335
I0527 00:39:43.179894       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.173.243).
E0527 00:39:43.247335       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-205/default: secrets "default-token-9jz4n" is forbidden: unable to create new content in namespace nettest-205 because it is being terminated
E0527 00:39:43.511841       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:43.561688       1 aws.go:2427] AttachVolume volume="vol-0226586ae109ac335" instance="i-069a67f4c9afb4c56" request returned {
  AttachTime: 2021-05-27 00:39:43.556 +0000 UTC,
  Device: "/dev/xvdcx",
  InstanceId: "i-069a67f4c9afb4c56",
  State: "attaching",
  VolumeId: "vol-0226586ae109ac335"
}
I0527 00:39:43.574306       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.30.239).
I0527 00:39:43.768374       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.86.30).
I0527 00:39:43.783511       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.30.239).
I0527 00:39:43.786501       1 event.go:291] "Event occurred" object="volume-expand-5875-5066/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0527 00:39:44.158271       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.16.123).
I0527 00:39:44.364252       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.16.123).
I0527 00:39:44.364822       1 event.go:291] "Event occurred" object="volume-expand-5875-5066/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0527 00:39:44.551486       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.243.132).
I0527 00:39:44.748408       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.243.132).
I0527 00:39:44.749249       1 event.go:291] "Event occurred" object="volume-expand-5875-5066/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0527 00:39:44.936239       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.253).
I0527 00:39:45.000119       1 deployment_controller.go:581] Deployment kubectl-6827/httpd-deployment has been deleted
I0527 00:39:45.134204       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.253).
I0527 00:39:45.135081       1 event.go:291] "Event occurred" object="volume-expand-5875-5066/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0527 00:39:45.162335       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.16.123).
I0527 00:39:45.321498       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.236.224).
I0527 00:39:45.524624       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.236.224).
I0527 00:39:45.524950       1 event.go:291] "Event occurred" object="volume-expand-5875-5066/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0527 00:39:45.527751       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-02e8e7d595de13dc7") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:39:45.539075       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-02e8e7d595de13dc7") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:39:45.558495       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.243.132).
I0527 00:39:45.703118       1 aws.go:2291] Waiting for volume "vol-0226586ae109ac335" state: actual=attaching, desired=attached
I0527 00:39:45.826127       1 namespace_controller.go:185] Namespace has been deleted kubectl-9732
I0527 00:39:45.940039       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.253).
I0527 00:39:45.992110       1 pv_controller.go:915] claim "provisioning-6905/pvc-5s6mx" bound to volume "local-xqljg"
I0527 00:39:45.996547       1 pv_controller.go:1326] isVolumeReleased[pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af]: volume is released
I0527 00:39:45.999036       1 pv_controller.go:864] volume "local-xqljg" entered phase "Bound"
I0527 00:39:45.999060       1 pv_controller.go:967] volume "local-xqljg" bound to claim "provisioning-6905/pvc-5s6mx"
I0527 00:39:46.004449       1 pv_controller.go:808] claim "provisioning-6905/pvc-5s6mx" entered phase "Bound"
I0527 00:39:46.005086       1 event.go:291] "Event occurred" object="volume-expand-6289/awswcw9q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0527 00:39:46.093174       1 event.go:291] "Event occurred" object="volume-expand-5875/csi-hostpathx2qj8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-5875\" or manually created by system administrator"
I0527 00:39:46.180418       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-02e8e7d595de13dc7: error deleting EBS volume "vol-02e8e7d595de13dc7" since volume is currently attached to "i-081c5901a8830e60d"
E0527 00:39:46.180515       1 goroutinemap.go:150] Operation for "delete-pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af[546a6fc5-7416-49bb-8395-5e826303815c]" failed. No retries permitted until 2021-05-27 00:39:47.180494208 +0000 UTC m=+1117.436756794 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-02e8e7d595de13dc7\" since volume is currently attached to \"i-081c5901a8830e60d\""
I0527 00:39:46.180712       1 event.go:291] "Event occurred" object="pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-02e8e7d595de13dc7\" since volume is currently attached to \"i-081c5901a8830e60d\""
I0527 00:39:46.331421       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9411
I0527 00:39:46.533083       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6359/webserver-6f6f95ddc4" need=0 deleting=1
I0527 00:39:46.533945       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6f6f95ddc4 to 0"
I0527 00:39:46.534347       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6359/webserver-6f6f95ddc4" relatedReplicaSets=[webserver-7c5f9f596d webserver-6f6f95ddc4 webserver-654cd69b7b webserver-84767c454]
I0527 00:39:46.534564       1 controller_utils.go:604] "Deleting pod" controller="webserver-6f6f95ddc4" pod="deployment-6359/webserver-6f6f95ddc4-bwzzm"
I0527 00:39:46.550131       1 event.go:291] "Event occurred" object="deployment-6359/webserver-6f6f95ddc4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6f6f95ddc4-bwzzm"
I0527 00:39:46.551232       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6359/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
E0527 00:39:46.619377       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-8264/default: secrets "default-token-psz8p" is forbidden: unable to create new content in namespace persistent-local-volumes-test-8264 because it is being terminated
I0527 00:39:46.629716       1 pv_controller.go:864] volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" entered phase "Bound"
I0527 00:39:46.631483       1 pv_controller.go:967] volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" bound to claim "volume-expand-5875/csi-hostpathx2qj8"
I0527 00:39:46.640580       1 pv_controller.go:808] claim "volume-expand-5875/csi-hostpathx2qj8" entered phase "Bound"
I0527 00:39:46.974723       1 pvc_protection_controller.go:291] PVC provisioning-4995/pvc-vkktk is unused
I0527 00:39:46.981142       1 pv_controller.go:638] volume "local-kqqvv" is released and reclaim policy "Retain" will be executed
I0527 00:39:46.986188       1 pv_controller.go:864] volume "local-kqqvv" entered phase "Released"
I0527 00:39:47.165139       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.196.152).
I0527 00:39:47.182131       1 pv_controller_base.go:504] deletion of claim "provisioning-4995/pvc-vkktk" was already processed
E0527 00:39:47.332339       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-7273/default: secrets "default-token-2z9gc" is forbidden: unable to create new content in namespace disruption-7273 because it is being terminated
I0527 00:39:47.447348       1 namespace_controller.go:185] Namespace has been deleted ephemeral-9915
I0527 00:39:47.564219       1 utils.go:413] couldn't find ipfamilies for headless service: volumemode-7628-9717/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.138.142).
I0527 00:39:47.576959       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-attacher-4l26s" objectUID=6c1f67a2-8dc4-4c98-abec-ce81df349fff kind="EndpointSlice" virtual=false
I0527 00:39:47.723866       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6359/webserver-654cd69b7b" need=7 creating=1
I0527 00:39:47.724267       1 event.go:291] "Event occurred" object="deployment-6359/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-654cd69b7b to 7"
I0527 00:39:47.738102       1 event.go:291] "Event occurred" object="deployment-6359/webserver-654cd69b7b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-654cd69b7b-n72ws"
I0527 00:39:47.773136       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-attacher-6fd57b6779" objectUID=fd9f3167-454e-4f06-97de-80253a634cc2 kind="ControllerRevision" virtual=false
I0527 00:39:47.773157       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-9915-136/csi-hostpath-attacher
I0527 00:39:47.773300       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-attacher-0" objectUID=3cf17364-7a5e-48ad-a73b-f45454916ca7 kind="Pod" virtual=false
I0527 00:39:47.836205       1 aws.go:2037] Releasing in-process attachment entry: cx -> volume vol-0226586ae109ac335
I0527 00:39:47.836247       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") from node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:39:47.836391       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-7842/pod-b3508d7e-4d62-4deb-9659-2a42ff05b851" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" "
I0527 00:39:47.978598       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:47.978745       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:47.978828       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:47.978910       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:47.979019       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:39:48.030871       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-attacher-6fd57b6779" objectUID=fd9f3167-454e-4f06-97de-80253a634cc2 kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:48.032062       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-attacher-0" objectUID=3cf17364-7a5e-48ad-a73b-f45454916ca7 kind="Pod" propagationPolicy=Background
I0527 00:39:48.032349       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-attacher-4l26s" objectUID=6c1f67a2-8dc4-4c98-abec-ce81df349fff kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:48.091395       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.16.123).
I0527 00:39:48.155738       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpathplugin-fjfzb" objectUID=ac8b573c-1981-4ee4-957f-1a69b918e17c kind="EndpointSlice" virtual=false
I0527 00:39:48.159064       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpathplugin-fjfzb" objectUID=ac8b573c-1981-4ee4-957f-1a69b918e17c kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:48.354847       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpathplugin-6d5c6fffcf" objectUID=d54b2397-5b85-4f87-abf4-e85f6a2b007f kind="ControllerRevision" virtual=false
I0527 00:39:48.355031       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-9915-136/csi-hostpathplugin
I0527 00:39:48.355185       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpathplugin-0" objectUID=5eca89d7-6660-4ea0-a055-a0b3610b7fd5 kind="Pod" virtual=false
I0527 00:39:48.356557       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpathplugin-6d5c6fffcf" objectUID=d54b2397-5b85-4f87-abf4-e85f6a2b007f kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:48.357474       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpathplugin-0" objectUID=5eca89d7-6660-4ea0-a055-a0b3610b7fd5 kind="Pod" propagationPolicy=Background
I0527 00:39:48.370084       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-9382
E0527 00:39:48.475604       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-1942/pvc-9s9jn: storageclass.storage.k8s.io "volume-1942" not found
I0527 00:39:48.475780       1 event.go:291] "Event occurred" object="volume-1942/pvc-9s9jn" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-1942\" not found"
I0527 00:39:48.500204       1 namespace_controller.go:185] Namespace has been deleted security-context-test-5434
I0527 00:39:48.545961       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-provisioner-6hgtv" objectUID=52927b57-8f04-4812-a658-5d0ae1d54bdb kind="EndpointSlice" virtual=false
I0527 00:39:48.548760       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-provisioner-6hgtv" objectUID=52927b57-8f04-4812-a658-5d0ae1d54bdb kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:48.680130       1 pv_controller.go:864] volume "local-f6r48" entered phase "Available"
I0527 00:39:48.689551       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.30.239).
I0527 00:39:48.745923       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-provisioner-9c4d7d85b" objectUID=688b6c40-54f4-4165-80df-a81c75b7a429 kind="ControllerRevision" virtual=false
I0527 00:39:48.746159       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-9915-136/csi-hostpath-provisioner
I0527 00:39:48.746205       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-provisioner-0" objectUID=574ec940-3e36-43e6-913e-094b35b870ce kind="Pod" virtual=false
I0527 00:39:48.748054       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-provisioner-0" objectUID=574ec940-3e36-43e6-913e-094b35b870ce kind="Pod" propagationPolicy=Background
I0527 00:39:48.748459       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-provisioner-9c4d7d85b" objectUID=688b6c40-54f4-4165-80df-a81c75b7a429 kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:48.936167       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-resizer-hx4j8" objectUID=19385d38-6cd5-4672-9dda-6d03a54acfbf kind="EndpointSlice" virtual=false
I0527 00:39:48.938798       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-resizer-hx4j8" objectUID=19385d38-6cd5-4672-9dda-6d03a54acfbf kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:49.088603       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.253).
I0527 00:39:49.136275       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-resizer-58867c9588" objectUID=e73c6021-51d7-4691-adc8-1b7e2ac9f7b1 kind="ControllerRevision" virtual=false
I0527 00:39:49.136714       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-9915-136/csi-hostpath-resizer
I0527 00:39:49.136826       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-resizer-0" objectUID=eb837fd6-22a0-43e3-852e-610923045c8f kind="Pod" virtual=false
I0527 00:39:49.138464       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-resizer-58867c9588" objectUID=e73c6021-51d7-4691-adc8-1b7e2ac9f7b1 kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:49.138464       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-resizer-0" objectUID=eb837fd6-22a0-43e3-852e-610923045c8f kind="Pod" propagationPolicy=Background
I0527 00:39:49.147109       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") from node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:39:49.158440       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") from node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:39:49.158703       1 event.go:291] "Event occurred" object="volume-expand-5875/pod-f5d10a52-cba3-44cb-8bdf-1799f052b728" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" "
I0527 00:39:49.329349       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-snapshotter-gw4g5" objectUID=a6488260-bb3c-4a6c-b77c-e0fd82e91744 kind="EndpointSlice" virtual=false
I0527 00:39:49.337028       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-snapshotter-gw4g5" objectUID=a6488260-bb3c-4a6c-b77c-e0fd82e91744 kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:49.532972       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-snapshotter-749b7558bb" objectUID=380099e6-7eb8-4424-93f0-aa8bac7daceb kind="ControllerRevision" virtual=false
I0527 00:39:49.533304       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-9915-136/csi-hostpath-snapshotter
I0527 00:39:49.533377       1 garbagecollector.go:471] "Processing object" object="ephemeral-9915-136/csi-hostpath-snapshotter-0" objectUID=bbd1c595-20cc-41c6-b419-8f01f43a2af9 kind="Pod" virtual=false
I0527 00:39:49.535364       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-snapshotter-749b7558bb" objectUID=380099e6-7eb8-4424-93f0-aa8bac7daceb kind="ControllerRevision" propagationPolicy=Background
I0527 00:39:49.535381       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9915-136/csi-hostpath-snapshotter-0" objectUID=bbd1c595-20cc-41c6-b419-8f01f43a2af9 kind="Pod" propagationPolicy=Background
I0527 00:39:49.694482       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.30.239).
I0527 00:39:50.089924       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.243.132).
I0527 00:39:50.490478       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.236.224).
I0527 00:39:51.027028       1 aws.go:2291] Waiting for volume "vol-02e8e7d595de13dc7" state: actual=detaching, desired=detached
I0527 00:39:51.096544       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.243.132).
I0527 00:39:51.498249       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.236.224).
I0527 00:39:51.728251       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8264
I0527 00:39:52.273803       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-6214/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.145.81).
I0527 00:39:52.436276       1 namespace_controller.go:185] Namespace has been deleted volume-2158
E0527 00:39:52.905360       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:39:53.093291       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-05-27 00:39:23 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdcn",
  InstanceId: "i-081c5901a8830e60d",
  State: "detaching",
  VolumeId: "vol-02e8e7d595de13dc7"
}
I0527 00:39:53.093349       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-02e8e7d595de13dc7") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
E0527 00:39:53.134928       1 tokens_controller.go:262] error synchronizing serviceaccount sysctl-8671/default: secrets "default-token-rdh49" is forbidden: unable to create new content in namespace sysctl-8671 because it is being terminated
I0527 00:39:53.336334       1 event.go:291] "Event occurred" object="topology-290/pvc-jzrcn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0527 00:39:54.082566       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
E0527 00:39:54.093091       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
E0527 00:39:54.108087       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
E0527 00:39:54.133059       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
E0527 00:39:54.177882       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
E0527 00:39:54.263023       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
I0527 00:39:54.386801       1 namespace_controller.go:185] Namespace has been deleted apf-8342
E0527 00:39:54.428006       1 publisher.go:164] syncing "fail-closed-namesapce" failed: Internal error occurred: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-6214.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
I0527 00:39:54.691758       1 namespace_controller.go:185] Namespace has been deleted nettest-3494
E0527 00:39:54.880111       1 tokens_controller.go:262] error synchronizing serviceaccount containers-425/default: secrets "default-token-bqcdh" is forbidden: unable to create new content in namespace containers-425 because it is being terminated
I0527 00:39:55.476205       1 garbagecollector.go:471] "Processing object" object="webhook-6214/e2e-test-webhook-ghvzc" objectUID=fa81899e-4b54-4d2a-916d-234668623121 kind="EndpointSlice" virtual=false
I0527 00:39:55.484877       1 garbagecollector.go:580] "Deleting object" object="webhook-6214/e2e-test-webhook-ghvzc" objectUID=fa81899e-4b54-4d2a-916d-234668623121 kind="EndpointSlice" propagationPolicy=Background
I0527 00:39:55.691028       1 garbagecollector.go:471] "Processing object" object="webhook-6214/sample-webhook-deployment-6bd9446d55" objectUID=b9994a32-1085-4e8c-9825-c859a0383d3b kind="ReplicaSet" virtual=false
I0527 00:39:55.691345       1 deployment_controller.go:581] Deployment webhook-6214/sample-webhook-deployment has been deleted
I0527 00:39:55.692973       1 garbagecollector.go:580] "Deleting object" object="webhook-6214/sample-webhook-deployment-6bd9446d55" objectUID=b9994a32-1085-4e8c-9825-c859a0383d3b kind="ReplicaSet" propagationPolicy=Background
I0527 00:39:55.695871       1 garbagecollector.go:471] "Processing object" object="webhook-6214/sample-webhook-deployment-6bd9446d55-x6dr8" objectUID=fabe4d07-61a8-45fa-b315-055850094ddb kind="Pod" virtual=false
I0527 00:39:55.697353       1 garbagecollector.go:580] "Deleting object" object="webhook-6214/sample-webhook-deployment-6bd9446d55-x6dr8" objectUID=fabe4d07-61a8-45fa-b315-055850094ddb kind="Pod" propagationPolicy=Background
E0527 00:39:55.982344       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:39:56.608894       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4995/default: secrets "default-token-v55xm" is forbidden: unable to create new content in namespace provisioning-4995 because it is being terminated
I0527 00:39:56.707641       1 namespace_controller.go:185] Namespace has been deleted replication-controller-8001
I0527 00:39:56.730304       1 garbagecollector.go:471] "Processing object" object="deployment-6359/webserver-654cd69b7b" objectUID=16be3a87-5a63-4e8a-bd3b-ac2e49773f0e kind="ReplicaSet" virtual=false
I0527 00:39:56.730665       1 deployment_controller.go:581] Deployment deployment-6359/webserver has been deleted
I0527 00:39:56.730715       1 garbagecollector.go:471] "Processing object" object="deployment-6359/webserver-7c5f9f596d" objectUID=bf0a03f3-bda0-42a0-99fb-9bd91d48ddfa kind="ReplicaSet" virtual=false
I0527 00:39:56.730856       1 garbagecollector.go:471] "Processing object" object="deployment-6359/webserver-6f6f95ddc4" objectUID=7157c425-d8f2-40d2-91b1-0dc69a73d54b kind="ReplicaSet" virtual=false
I0527 00:39:56.734729       1 garbagecollector.go:580] "Deleting object" object="deployment-6359/webserver-7c5f9f596d" objectUID=bf0a03f3-bda0-42a0-99fb-9bd91d48ddfa kind="ReplicaSet" propagationPolicy=Background
I0527 00:39:56.734761       1 garbagecollector.go:580] "Deleting object" object="deployment-6359/webserver-654cd69b7b" objectUID=16be3a87-5a63-4e8a-bd3b-ac2e49773f0e kind="ReplicaSet" propagationPolicy=Background
I0527 00:39:56.734784       1 garbagecollector.go:580] "Deleting object" object="deployment-6359/webserver-6f6f95ddc4" objectUID=7157c425-d8f2-40d2-91b1-0dc69a73d54b kind="ReplicaSet" propagationPolicy=Background
I0527 00:39:56.746502       1 garbagecollector.go:471] "Processing object" object="deployment-6359/webserver-654cd69b7b-hqvrd" objectUID=29bfeca9-23e0-4bf2-8b95-930116a1f774 kind="Pod" virtual=false
I0527 00:39:56.746852       1 garbagecollector.go:471] "Processing object" object="deployment-6359/webserver-654cd69b7b-8tbdr" 
objectUID=d2105d8a-ec3c-49b5-8e64-c69dc68468fb kind=\"Pod\" virtual=false\nI0527 00:39:56.747132       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6359/webserver-654cd69b7b-w895w\" objectUID=56f2f96a-6b15-4778-aab4-42ff7a7de63c kind=\"Pod\" virtual=false\nI0527 00:39:56.747386       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6359/webserver-654cd69b7b-rgjgx\" objectUID=5bc380a1-a6e7-48c8-9c1b-c6e382adcb60 kind=\"Pod\" virtual=false\nI0527 00:39:56.747670       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6359/webserver-654cd69b7b-n72ws\" objectUID=056ff5dd-84e3-47bc-8ecc-ce483bf8ad97 kind=\"Pod\" virtual=false\nI0527 00:39:56.747947       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6359/webserver-654cd69b7b-cz5zg\" objectUID=806db805-9be1-45a5-9639-e4ab84388641 kind=\"Pod\" virtual=false\nI0527 00:39:56.748206       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6359/webserver-654cd69b7b-qcvpg\" objectUID=ebee7c56-8cbb-470e-acfb-6eb773e346ac kind=\"Pod\" virtual=false\nI0527 00:39:56.749132       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-hqvrd\" objectUID=29bfeca9-23e0-4bf2-8b95-930116a1f774 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:56.751128       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-8tbdr\" objectUID=d2105d8a-ec3c-49b5-8e64-c69dc68468fb kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:56.751406       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-w895w\" objectUID=56f2f96a-6b15-4778-aab4-42ff7a7de63c kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:56.757138       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-qcvpg\" objectUID=ebee7c56-8cbb-470e-acfb-6eb773e346ac kind=\"Pod\" propagationPolicy=Background\nI0527 
00:39:56.761150       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-rgjgx\" objectUID=5bc380a1-a6e7-48c8-9c1b-c6e382adcb60 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:56.761340       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-cz5zg\" objectUID=806db805-9be1-45a5-9639-e4ab84388641 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:56.761633       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6359/webserver-654cd69b7b-n72ws\" objectUID=056ff5dd-84e3-47bc-8ecc-ce483bf8ad97 kind=\"Pod\" propagationPolicy=Background\nI0527 00:39:56.796747       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-580-1307\nI0527 00:39:56.805430       1 event.go:291] \"Event occurred\" object=\"volume-expand-6289/awswcw9q\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0527 00:39:56.808875       1 pvc_protection_controller.go:291] PVC volume-expand-6289/awswcw9q is unused\nI0527 00:39:57.692798       1 namespace_controller.go:185] Namespace has been deleted downward-api-4099\nE0527 00:39:57.749617       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-4000/default: secrets \"default-token-rc8z5\" is forbidden: unable to create new content in namespace security-context-test-4000 because it is being terminated\nI0527 00:39:57.994525       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:57.994529       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:57.994552       1 route_controller.go:294] set node 
ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:57.994606       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:57.994624       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:39:58.179300       1 namespace_controller.go:185] Namespace has been deleted sysctl-8671\nI0527 00:39:58.486664       1 namespace_controller.go:185] Namespace has been deleted provisioning-40\nE0527 00:39:58.567736       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:39:58.921550       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-1a/vol-0edaa34b7b8587ef9\nI0527 00:39:58.975904       1 pv_controller.go:1652] volume \"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" provisioned for claim \"topology-290/pvc-jzrcn\"\nI0527 00:39:58.976574       1 event.go:291] \"Event occurred\" object=\"topology-290/pvc-jzrcn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-c264216c-9abe-451a-ac1b-f31f143369d2 using kubernetes.io/aws-ebs\"\nI0527 00:39:58.979496       1 pv_controller.go:864] volume \"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" entered phase \"Bound\"\nI0527 00:39:58.979800       1 pv_controller.go:967] volume \"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" bound to claim \"topology-290/pvc-jzrcn\"\nI0527 00:39:58.984837       1 pv_controller.go:808] claim \"topology-290/pvc-jzrcn\" entered phase \"Bound\"\nI0527 00:39:59.017341       1 namespace_controller.go:185] Namespace has been 
deleted volume-4770\nI0527 00:39:59.593785       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0edaa34b7b8587ef9\") from node \"ip-172-20-33-93.ap-southeast-1.compute.internal\" \nI0527 00:39:59.674143       1 aws.go:2014] Assigned mount device bo -> volume vol-0edaa34b7b8587ef9\nI0527 00:39:59.910141       1 utils.go:424] couldn't find ipfamilies for headless service: dns-2413/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:39:59.924789       1 utils.go:424] couldn't find ipfamilies for headless service: dns-2413/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:39:59.954198       1 namespace_controller.go:185] Namespace has been deleted containers-425\nI0527 00:40:00.039852       1 aws.go:2427] AttachVolume volume=\"vol-0edaa34b7b8587ef9\" instance=\"i-081c5901a8830e60d\" request returned {\n  AttachTime: 2021-05-27 00:40:00.029 +0000 UTC,\n  Device: \"/dev/xvdbo\",\n  InstanceId: \"i-081c5901a8830e60d\",\n  State: \"attaching\",\n  VolumeId: \"vol-0edaa34b7b8587ef9\"\n}\nE0527 00:40:00.106384       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-6214/default: secrets \"default-token-bnj25\" is forbidden: unable to create new content in namespace webhook-6214 because it is being terminated\nI0527 00:40:00.137401       1 garbagecollector.go:471] \"Processing object\" object=\"dns-2413/dns-test-service-2-h6c79\" objectUID=93928027-0d48-4ec4-867d-5ea3375c8a72 kind=\"EndpointSlice\" virtual=false\nI0527 00:40:00.138035       1 
garbagecollector.go:471] \"Processing object\" object=\"dns-2413/dns-test-service-2-mkpcw\" objectUID=0ae64971-b5e5-4ad5-819d-17274c99faee kind=\"EndpointSlice\" virtual=false\nI0527 00:40:00.143755       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-2413/dns-test-service-2-h6c79\" objectUID=93928027-0d48-4ec4-867d-5ea3375c8a72 kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:40:00.145108       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-2413/dns-test-service-2-mkpcw\" objectUID=0ae64971-b5e5-4ad5-819d-17274c99faee kind=\"EndpointSlice\" propagationPolicy=Background\nE0527 00:40:00.335835       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-6214-markers/default: secrets \"default-token-vrmjq\" is forbidden: unable to create new content in namespace webhook-6214-markers because it is being terminated\nI0527 00:40:00.992516       1 pv_controller.go:915] claim \"volume-1942/pvc-9s9jn\" bound to volume \"local-f6r48\"\nI0527 00:40:00.994688       1 pv_controller.go:1326] isVolumeReleased[pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af]: volume is released\nI0527 00:40:00.998516       1 pv_controller.go:864] volume \"local-f6r48\" entered phase \"Bound\"\nI0527 00:40:00.998544       1 pv_controller.go:967] volume \"local-f6r48\" bound to claim \"volume-1942/pvc-9s9jn\"\nI0527 00:40:01.003437       1 pv_controller.go:808] claim \"volume-1942/pvc-9s9jn\" entered phase \"Bound\"\nI0527 00:40:01.183584       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-1a/vol-02e8e7d595de13dc7\nI0527 00:40:01.183687       1 pv_controller.go:1421] volume \"pvc-251e708f-ceb5-4daf-8e91-6b30cb7b83af\" deleted\nI0527 00:40:01.192267       1 pv_controller_base.go:504] deletion of claim \"provisioning-7625/awsm8fkr\" was already processed\nI0527 00:40:01.687580       1 namespace_controller.go:185] Namespace has been deleted provisioning-4995\nI0527 00:40:01.724765       1 namespace_controller.go:185] Namespace 
has been deleted pods-9118\nI0527 00:40:02.178260       1 aws.go:2037] Releasing in-process attachment entry: bo -> volume vol-0edaa34b7b8587ef9\nI0527 00:40:02.178443       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0edaa34b7b8587ef9\") from node \"ip-172-20-33-93.ap-southeast-1.compute.internal\" \nI0527 00:40:02.178528       1 event.go:291] \"Event occurred\" object=\"topology-290/pod-bf6a1f02-dd92-4b83-90ce-d412f2e69fef\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\\\" \"\nI0527 00:40:02.348524       1 event.go:291] \"Event occurred\" object=\"cronjob-6308/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-1622076000\"\nI0527 00:40:02.357804       1 event.go:291] \"Event occurred\" object=\"cronjob-6308/concurrent-1622076000\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-1622076000-v6rgt\"\nI0527 00:40:02.358749       1 cronjob_controller.go:188] Unable to update status for cronjob-6308/concurrent (rv = 29812): Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nE0527 00:40:02.399773       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-6289/default: secrets \"default-token-jrf9q\" is forbidden: unable to create new content in namespace volume-expand-6289 because it is being terminated\nI0527 00:40:03.214373       1 namespace_controller.go:185] Namespace has been deleted deployment-6359\nI0527 00:40:04.212621       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume 
\"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7628^05a1b4bd-be84-11eb-b463-a63ce1686753\") on node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:40:04.216180       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7628^05a1b4bd-be84-11eb-b463-a63ce1686753\") on node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:40:04.224140       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-7628^05a1b4bd-be84-11eb-b463-a63ce1686753\") on node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:40:04.354674       1 pvc_protection_controller.go:291] PVC provisioning-6905/pvc-5s6mx is unused\nI0527 00:40:04.360377       1 pv_controller.go:638] volume \"local-xqljg\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:04.363297       1 pv_controller.go:864] volume \"local-xqljg\" entered phase \"Released\"\nI0527 00:40:04.548143       1 pv_controller_base.go:504] deletion of claim \"provisioning-6905/pvc-5s6mx\" was already processed\nI0527 00:40:04.571231       1 namespace_controller.go:185] Namespace has been deleted fail-closed-namesapce\nE0527 00:40:04.604149       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-2334/pvc-q9ttq: storageclass.storage.k8s.io \"provisioning-2334\" not found\nI0527 00:40:04.604937       1 event.go:291] \"Event occurred\" object=\"provisioning-2334/pvc-q9ttq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2334\\\" not found\"\nI0527 00:40:04.795004       1 pv_controller.go:864] volume \"local-dnn2h\" entered phase \"Available\"\nI0527 
00:40:04.968738       1 pvc_protection_controller.go:291] PVC provisioning-8850/nfsbwrxs is unused\nI0527 00:40:04.974678       1 pv_controller.go:638] volume \"pvc-b929ce7a-e804-4a39-865a-f0c4a3c1991a\" is released and reclaim policy \"Delete\" will be executed\nI0527 00:40:04.977715       1 pv_controller.go:864] volume \"pvc-b929ce7a-e804-4a39-865a-f0c4a3c1991a\" entered phase \"Released\"\nI0527 00:40:04.979917       1 pv_controller.go:1326] isVolumeReleased[pvc-b929ce7a-e804-4a39-865a-f0c4a3c1991a]: volume is released\nI0527 00:40:04.991817       1 pv_controller_base.go:504] deletion of claim \"provisioning-8850/nfsbwrxs\" was already processed\nI0527 00:40:05.247600       1 namespace_controller.go:185] Namespace has been deleted webhook-6214\nI0527 00:40:05.372215       1 namespace_controller.go:185] Namespace has been deleted webhook-6214-markers\nI0527 00:40:05.525387       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50\") on node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nI0527 00:40:05.527503       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50\") on node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nI0527 00:40:05.537442       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50\") on node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nE0527 00:40:05.917648       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: 
the server could not find the requested resource\nI0527 00:40:06.167456       1 utils.go:424] couldn't find ipfamilies for headless service: services-4792/externalname-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:06.556233       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).\nI0527 00:40:06.748697       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-4792/externalname-service\" need=2 creating=2\nI0527 00:40:06.752192       1 event.go:291] \"Event occurred\" object=\"services-4792/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-nhhxn\"\nI0527 00:40:06.752363       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).\nI0527 00:40:06.762382       1 event.go:291] \"Event occurred\" object=\"services-4792/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-lkn7h\"\nI0527 00:40:06.767772       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).\nE0527 00:40:06.923436       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-6751/default: secrets \"default-token-f2ctc\" is forbidden: unable to create new content in namespace kubectl-6751 because it is being terminated\nI0527 00:40:07.175020       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).\nI0527 00:40:07.472322       1 namespace_controller.go:185] Namespace has been deleted volume-expand-6289\nI0527 00:40:07.649059       1 namespace_controller.go:185] Namespace has been deleted hostpath-2532\nI0527 00:40:07.743632       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-2163/test-quota\nI0527 00:40:07.785716       1 pv_controller.go:864] volume \"local-pvwmp5r\" entered phase \"Available\"\nI0527 00:40:07.977042       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).\nI0527 00:40:07.981845       1 pv_controller.go:915] claim \"persistent-local-volumes-test-1593/pvc-jnjqb\" bound to volume \"local-pvwmp5r\"\nI0527 00:40:07.985729       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:07.985745       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:07.985756       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:07.985765       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:07.985773       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:08.003279       1 pv_controller.go:864] volume \"local-pvwmp5r\" entered phase \"Bound\"\nI0527 00:40:08.003460       1 pv_controller.go:967] volume \"local-pvwmp5r\" bound to claim \"persistent-local-volumes-test-1593/pvc-jnjqb\"\nI0527 00:40:08.016229       1 pv_controller.go:808] claim \"persistent-local-volumes-test-1593/pvc-jnjqb\" entered phase \"Bound\"\nE0527 00:40:08.062096       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-3495/default: secrets \"default-token-6lmcb\" is forbidden: unable to create new content in namespace kubectl-3495 because it is being terminated\nE0527 00:40:08.236133       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: 
the server could not find the requested resource\nI0527 00:40:08.753411       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"gc-3569/simpletest.rc\" need=10 creating=10\nI0527 00:40:08.757578       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4jjl9\"\nI0527 00:40:08.766734       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-mw94d\"\nI0527 00:40:08.770879       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-g7jzz\"\nI0527 00:40:08.782721       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-nd5qw\"\nI0527 00:40:08.784064       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-9wn64\"\nI0527 00:40:08.784455       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-lzn8f\"\nI0527 00:40:08.786979       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-6x5m7\"\nI0527 00:40:08.795976       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-dbkxk\"\nI0527 00:40:08.797241       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-vh6l2\"\nI0527 00:40:08.798689       1 event.go:291] \"Event occurred\" object=\"gc-3569/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-m778h\"\nI0527 00:40:08.952950       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-1593/pvc-jnjqb is unused\nI0527 00:40:08.957956       1 pv_controller.go:638] volume \"local-pvwmp5r\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:08.960363       1 pv_controller.go:864] volume \"local-pvwmp5r\" entered phase \"Released\"\nI0527 00:40:08.989921       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).\nI0527 00:40:09.150539       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-1593/pvc-jnjqb\" was already processed\nI0527 00:40:09.635950       1 namespace_controller.go:185] Namespace has been deleted nettest-205\nI0527 00:40:10.152786       1 pvc_protection_controller.go:291] PVC volumemode-7628/csi-hostpathc8dm6 is unused\nI0527 00:40:10.182571       1 pv_controller.go:638] volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" is released and reclaim policy \"Delete\" will be executed\nI0527 00:40:10.199870       1 pv_controller.go:864] volume \"pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4\" entered phase \"Released\"\nI0527 00:40:10.208930       1 pv_controller.go:1326] isVolumeReleased[pvc-1c817ea0-ff5f-4d61-81b8-2b61b8a9dbd4]: volume is released\nI0527 00:40:10.255307       1 pv_controller_base.go:504] deletion of claim \"volumemode-7628/csi-hostpathc8dm6\" was already processed\nI0527 00:40:10.690431       1 namespace_controller.go:185] Namespace has been deleted dns-2413\nI0527 00:40:10.817305       1 utils.go:413] couldn't find ipfamilies for headless service: services-4792/externalname-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.99.54).
I0527 00:40:10.818257       1 namespace_controller.go:185] Namespace has been deleted provisioning-2421
I0527 00:40:10.841080       1 namespace_controller.go:185] Namespace has been deleted downward-api-7640
I0527 00:40:11.000126       1 deployment_controller.go:581] Deployment deployment-4442/test-rolling-update-with-lb has been deleted
I0527 00:40:11.264113       1 pvc_protection_controller.go:291] PVC fsgroupchangepolicy-7842/awsmrlhm is unused
I0527 00:40:11.272122       1 pv_controller.go:638] volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" is released and reclaim policy "Delete" will be executed
I0527 00:40:11.277394       1 pv_controller.go:864] volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" entered phase "Released"
I0527 00:40:11.281211       1 pv_controller.go:1326] isVolumeReleased[pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef]: volume is released
I0527 00:40:11.489558       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-0226586ae109ac335: error deleting EBS volume "vol-0226586ae109ac335" since volume is currently attached to "i-069a67f4c9afb4c56"
E0527 00:40:11.489751       1 goroutinemap.go:150] Operation for "delete-pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef[3c195398-8a3d-4cf1-882a-03a473305ad8]" failed. No retries permitted until 2021-05-27 00:40:11.989734983 +0000 UTC m=+1142.245997572 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0226586ae109ac335\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:40:11.489877       1 event.go:291] "Event occurred" object="pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0226586ae109ac335\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
E0527 00:40:11.500915       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:40:11.996558       1 namespace_controller.go:185] Namespace has been deleted kubectl-6751
E0527 00:40:12.000547       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:40:12.554019       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-9385/default: secrets "default-token-pcgbh" is forbidden: unable to create new content in namespace nettest-9385 because it is being terminated
I0527 00:40:12.582994       1 expand_controller.go:277] Ignoring the PVC "volume-expand-5875/csi-hostpathx2qj8" (uid: "07f4aadc-e7f1-4b78-a8f3-74a6336177c3") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I0527 00:40:12.583242       1 event.go:291] "Event occurred" object="volume-expand-5875/csi-hostpathx2qj8" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
I0527 00:40:12.756214       1 namespace_controller.go:185] Namespace has been deleted resourcequota-2163
I0527 00:40:13.110080       1 namespace_controller.go:185] Namespace has been deleted kubectl-3495
I0527 00:40:13.153255       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:40:13.157209       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:40:13.253534       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") from node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:40:13.258407       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") from node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:40:13.258576       1 event.go:291] "Event occurred" object="volume-expand-5875/pod-eb283b63-d12d-4765-a340-1ece55ca0e03" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" "
I0527 00:40:13.265385       1 event.go:291] "Event occurred" object="provisioning-9632/nfsddw8w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"example.com/nfs-provisioning-9632\" or manually created by system administrator"
I0527 00:40:13.540418       1 namespace_controller.go:185] Namespace has been deleted disruption-7273
E0527 00:40:13.720876       1 tokens_controller.go:262] error synchronizing serviceaccount dns-6938/default: secrets "default-token-q7tvp" is forbidden: unable to create new content in namespace dns-6938 because it is being terminated
E0527 00:40:14.026959       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:40:14.158427       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.159003       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.164205       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.164729       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.165192       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.168200       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.168719       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.169153       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.169299       1 garbagecollector.go:471] "Processing object" object="gc-3569/simpletest.rc" objectUID=2614da4b-2d67-47f4-87e8-45cce32f12b0 kind="ReplicationController" virtual=false
I0527 00:40:14.490739       1 namespace_controller.go:185] Namespace has been deleted provisioning-7625
E0527 00:40:14.681430       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-669/default: secrets "default-token-779br" is forbidden: unable to create new content in namespace provisioning-669 because it is being terminated
I0527 00:40:15.036226       1 pv_controller.go:864] volume "pvc-d696d43d-eb53-4d11-85e5-5c2b50565ab4" entered phase "Bound"
I0527 00:40:15.036260       1 pv_controller.go:967] volume "pvc-d696d43d-eb53-4d11-85e5-5c2b50565ab4" bound to claim "provisioning-9632/nfsddw8w"
I0527 00:40:15.041371       1 pv_controller.go:808] claim "provisioning-9632/nfsddw8w" entered phase "Bound"
I0527 00:40:15.340989       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-4583
I0527 00:40:15.369184       1 event.go:291] "Event occurred" object="provisioning-8319/awsj7qlw" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0527 00:40:15.739517       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-7628/default: secrets "default-token-d27lb" is forbidden: unable to create new content in namespace volumemode-7628 because it is being terminated
E0527 00:40:15.906525       1 tokens_controller.go:262] error synchronizing serviceaccount metrics-grabber-3770/default: secrets "default-token-nwkjp" is forbidden: unable to create new content in namespace metrics-grabber-3770 because it is being terminated
I0527 00:40:15.993947       1 pv_controller.go:915] claim "provisioning-2334/pvc-q9ttq" bound to volume "local-dnn2h"
I0527 00:40:15.999838       1 pv_controller.go:1326] isVolumeReleased[pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef]: volume is released
I0527 00:40:16.007851       1 pv_controller.go:864] volume "local-dnn2h" entered phase "Bound"
I0527 00:40:16.008060       1 pv_controller.go:967] volume "local-dnn2h" bound to claim "provisioning-2334/pvc-q9ttq"
I0527 00:40:16.027998       1 pv_controller.go:808] claim "provisioning-2334/pvc-q9ttq" entered phase "Bound"
I0527 00:40:16.211365       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-0226586ae109ac335: error deleting EBS volume "vol-0226586ae109ac335" since volume is currently attached to "i-069a67f4c9afb4c56"
E0527 00:40:16.213964       1 goroutinemap.go:150] Operation for "delete-pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef[3c195398-8a3d-4cf1-882a-03a473305ad8]" failed. No retries permitted until 2021-05-27 00:40:17.213926633 +0000 UTC m=+1147.470189223 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-0226586ae109ac335\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:40:16.214030       1 event.go:291] "Event occurred" object="pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0226586ae109ac335\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:40:16.621456       1 namespace_controller.go:185] Namespace has been deleted emptydir-125
I0527 00:40:17.000138       1 deployment_controller.go:581] Deployment deployment-6359/webserver has been deleted
I0527 00:40:17.598239       1 namespace_controller.go:185] Namespace has been deleted nettest-9385
I0527 00:40:18.004971       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:18.005059       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:18.005072       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:18.005082       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:18.005092       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:18.015664       1 event.go:291] "Event occurred" object="csi-mock-volumes-7499-1425/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0527 00:40:18.587631       1 aws.go:2291] Waiting for volume "vol-0226586ae109ac335" state: actual=detaching, desired=detached
I0527 00:40:18.743261       1 namespace_controller.go:185] Namespace has been deleted dns-6938
I0527 00:40:19.767484       1 namespace_controller.go:185] Namespace has been deleted provisioning-669
I0527 00:40:20.196363       1 namespace_controller.go:185] Namespace has been deleted emptydir-3391
E0527 00:40:20.234400       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-1593/default: secrets "default-token-dxdnt" is forbidden: unable to create new content in namespace persistent-local-volumes-test-1593 because it is being terminated
I0527 00:40:20.553876       1 namespace_controller.go:185] Namespace has been deleted provisioning-6905
I0527 00:40:20.697142       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-05-27 00:39:43 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdcx",
  InstanceId: "i-069a67f4c9afb4c56",
  State: "detaching",
  VolumeId: "vol-0226586ae109ac335"
}
I0527 00:40:20.697194       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0226586ae109ac335") on node "ip-172-20-40-209.ap-southeast-1.compute.internal" 
I0527 00:40:20.851501       1 namespace_controller.go:185] Namespace has been deleted volumemode-7628
I0527 00:40:20.908827       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-attacher-ct5lq" objectUID=45f52c6c-7cd2-448b-abc4-2b623e91a356 kind="EndpointSlice" virtual=false
I0527 00:40:20.951451       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-3770
I0527 00:40:21.124566       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-attacher-777c6c5855" objectUID=be980694-c0c7-42fd-b191-1bfa4638855f kind="ControllerRevision" virtual=false
I0527 00:40:21.124633       1 stateful_set.go:419] StatefulSet has been deleted volumemode-7628-9717/csi-hostpath-attacher
I0527 00:40:21.124687       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-attacher-0" objectUID=8952e6d0-0bd2-4d6b-a069-917d750ed557 kind="Pod" virtual=false
I0527 00:40:21.125062       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-1a/vol-0c15f8a6dd022e65e
I0527 00:40:21.187302       1 pv_controller.go:1652] volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" provisioned for claim "provisioning-8319/awsj7qlw"
I0527 00:40:21.187653       1 event.go:291] "Event occurred" object="provisioning-8319/awsj7qlw" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2 using kubernetes.io/aws-ebs"
I0527 00:40:21.191197       1 pv_controller.go:864] volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" entered phase "Bound"
I0527 00:40:21.191328       1 pv_controller.go:967] volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" bound to claim "provisioning-8319/awsj7qlw"
I0527 00:40:21.197093       1 pv_controller.go:808] claim "provisioning-8319/awsj7qlw" entered phase "Bound"
E0527 00:40:21.306783       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:40:21.369256       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-attacher-777c6c5855" objectUID=be980694-c0c7-42fd-b191-1bfa4638855f kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:21.369685       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-attacher-ct5lq" objectUID=45f52c6c-7cd2-448b-abc4-2b623e91a356 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:21.370043       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-attacher-0" objectUID=8952e6d0-0bd2-4d6b-a069-917d750ed557 kind="Pod" propagationPolicy=Background
I0527 00:40:21.507419       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpathplugin-cmpl6" objectUID=559e0548-e007-4a5c-86fd-5d908a2d1e74 kind="EndpointSlice" virtual=false
I0527 00:40:21.510962       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpathplugin-cmpl6" objectUID=559e0548-e007-4a5c-86fd-5d908a2d1e74 kind="EndpointSlice" propagationPolicy=Background
E0527 00:40:21.624904       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-2-6403/default: secrets "default-token-8scjp" is forbidden: unable to create new content in namespace disruption-2-6403 because it is being terminated
I0527 00:40:21.708130       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpathplugin-6ffb59fc9c" objectUID=0a3aa461-8bca-41fc-925c-03d4b8c72c83 kind="ControllerRevision" virtual=false
I0527 00:40:21.708622       1 stateful_set.go:419] StatefulSet has been deleted volumemode-7628-9717/csi-hostpathplugin
I0527 00:40:21.709060       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpathplugin-0" objectUID=b2718fb0-f8ea-40d6-974c-74bd17007605 kind="Pod" virtual=false
I0527 00:40:21.711522       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpathplugin-6ffb59fc9c" objectUID=0a3aa461-8bca-41fc-925c-03d4b8c72c83 kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:21.711522       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpathplugin-0" objectUID=b2718fb0-f8ea-40d6-974c-74bd17007605 kind="Pod" propagationPolicy=Background
I0527 00:40:21.875648       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e") from node "ip-172-20-40-196.ap-southeast-1.compute.internal" 
E0527 00:40:21.880720       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-9152/default: secrets "default-token-7j577" is forbidden: unable to create new content in namespace ingress-9152 because it is being terminated
I0527 00:40:21.899247       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-provisioner-nbxqc" objectUID=1dd5c1eb-2652-4519-9045-6bffc8405ea9 kind="EndpointSlice" virtual=false
I0527 00:40:21.905108       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-provisioner-nbxqc" objectUID=1dd5c1eb-2652-4519-9045-6bffc8405ea9 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:21.953216       1 aws.go:2014] Assigned mount device bc -> volume vol-0c15f8a6dd022e65e
E0527 00:40:22.027703       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:40:22.032584       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-1163/default: secrets "default-token-hxqph" is forbidden: unable to create new content in namespace disruption-1163 because it is being terminated
I0527 00:40:22.099141       1 pvc_protection_controller.go:291] PVC provisioning-9632/nfsddw8w is unused
I0527 00:40:22.108189       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-provisioner-57f589fdb5" objectUID=9353c48d-7c31-4e6e-817d-bc91d721dde1 kind="ControllerRevision" virtual=false
I0527 00:40:22.108581       1 stateful_set.go:419] StatefulSet has been deleted volumemode-7628-9717/csi-hostpath-provisioner
I0527 00:40:22.109008       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-provisioner-0" objectUID=c838a404-7299-4c7a-90a6-b80e1c40977a kind="Pod" virtual=false
I0527 00:40:22.112845       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-provisioner-0" objectUID=c838a404-7299-4c7a-90a6-b80e1c40977a kind="Pod" propagationPolicy=Background
I0527 00:40:22.113395       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-provisioner-57f589fdb5" objectUID=9353c48d-7c31-4e6e-817d-bc91d721dde1 kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:22.115734       1 pv_controller.go:638] volume "pvc-d696d43d-eb53-4d11-85e5-5c2b50565ab4" is released and reclaim policy "Delete" will be executed
I0527 00:40:22.122117       1 pv_controller.go:864] volume "pvc-d696d43d-eb53-4d11-85e5-5c2b50565ab4" entered phase "Released"
I0527 00:40:22.126741       1 pv_controller.go:1326] isVolumeReleased[pvc-d696d43d-eb53-4d11-85e5-5c2b50565ab4]: volume is released
I0527 00:40:22.137537       1 pv_controller_base.go:504] deletion of claim "provisioning-9632/nfsddw8w" was already processed
I0527 00:40:22.320323       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-resizer-96xkc" objectUID=5388099b-fd36-476a-8241-6dd0b2e518ef kind="EndpointSlice" virtual=false
I0527 00:40:22.333088       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-resizer-96xkc" objectUID=5388099b-fd36-476a-8241-6dd0b2e518ef kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:22.384952       1 aws.go:2427] AttachVolume volume="vol-0c15f8a6dd022e65e" instance="i-063fbd80874e99720" request returned {
  AttachTime: 2021-05-27 00:40:22.371 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-063fbd80874e99720",
  State: "attaching",
  VolumeId: "vol-0c15f8a6dd022e65e"
}
I0527 00:40:22.546777       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-resizer-779c899b9d" objectUID=3acad8f2-5f83-453f-b805-664c5f8323f6 kind="ControllerRevision" virtual=false
I0527 00:40:22.547186       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-resizer-0" objectUID=94c12f1c-3264-462b-95d9-b383e70b888b kind="Pod" virtual=false
I0527 00:40:22.547128       1 stateful_set.go:419] StatefulSet has been deleted volumemode-7628-9717/csi-hostpath-resizer
I0527 00:40:22.549027       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-resizer-0" objectUID=94c12f1c-3264-462b-95d9-b383e70b888b kind="Pod" propagationPolicy=Background
I0527 00:40:22.549425       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-resizer-779c899b9d" objectUID=3acad8f2-5f83-453f-b805-664c5f8323f6 kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:22.605863       1 namespace_controller.go:185] Namespace has been deleted provisioning-8850
I0527 00:40:22.737345       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-snapshotter-54qxv" objectUID=ea681daa-bdad-41bd-b30b-9d7271000693 kind="EndpointSlice" virtual=false
I0527 00:40:22.743731       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-snapshotter-54qxv" objectUID=ea681daa-bdad-41bd-b30b-9d7271000693 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:22.895092       1 garbagecollector.go:471] "Processing object" object="services-4792/externalname-service-q86w8" objectUID=9a2c1a8a-1311-4482-81ee-67a2ca682b61 kind="EndpointSlice" virtual=false
I0527 00:40:22.898158       1 garbagecollector.go:580] "Deleting object" object="services-4792/externalname-service-q86w8" objectUID=9a2c1a8a-1311-4482-81ee-67a2ca682b61 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:22.935738       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-snapshotter-c4b5946d6" objectUID=0204411f-84e7-4a6a-b6ed-4f2dffe448c4 kind="ControllerRevision" virtual=false
I0527 00:40:22.936090       1 stateful_set.go:419] StatefulSet has been deleted volumemode-7628-9717/csi-hostpath-snapshotter
I0527 00:40:22.936132       1 garbagecollector.go:471] "Processing object" object="volumemode-7628-9717/csi-hostpath-snapshotter-0" objectUID=12fb4192-7338-48f8-b31d-e36b1f34dd60 kind="Pod" virtual=false
I0527 00:40:22.937940       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-snapshotter-c4b5946d6" objectUID=0204411f-84e7-4a6a-b6ed-4f2dffe448c4 kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:22.938150       1 garbagecollector.go:580] "Deleting object" object="volumemode-7628-9717/csi-hostpath-snapshotter-0" objectUID=12fb4192-7338-48f8-b31d-e36b1f34dd60 kind="Pod" propagationPolicy=Background
I0527 00:40:23.143401       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.117.206).
I0527 00:40:23.339860       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.117.206).
I0527 00:40:23.343472       1 event.go:291] "Event occurred" object="volume-expand-7299-9009/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
E0527 00:40:23.375121       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:40:23.500291       1 pvc_protection_controller.go:291] PVC topology-290/pvc-jzrcn is unused
I0527 00:40:23.514348       1 pv_controller.go:638] volume "pvc-c264216c-9abe-451a-ac1b-f31f143369d2" is released and reclaim policy "Delete" will be executed
I0527 00:40:23.517069       1 pv_controller.go:864] volume "pvc-c264216c-9abe-451a-ac1b-f31f143369d2" entered phase "Released"
I0527 00:40:23.518298       1 pv_controller.go:1326] isVolumeReleased[pvc-c264216c-9abe-451a-ac1b-f31f143369d2]: volume is released
I0527 00:40:23.718256       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.131.235).
I0527 00:40:23.724659       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-0edaa34b7b8587ef9: error deleting EBS volume "vol-0edaa34b7b8587ef9" since volume is currently attached to "i-081c5901a8830e60d"
E0527 00:40:23.726410       1 goroutinemap.go:150] Operation for "delete-pvc-c264216c-9abe-451a-ac1b-f31f143369d2[02540198-364e-4d1d-860e-64d63023c40d]" failed. No retries permitted until 2021-05-27 00:40:24.226394832 +0000 UTC m=+1154.482657418 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0edaa34b7b8587ef9\" since volume is currently attached to \"i-081c5901a8830e60d\""
I0527 00:40:23.726473       1 event.go:291] "Event occurred" object="pvc-c264216c-9abe-451a-ac1b-f31f143369d2" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0edaa34b7b8587ef9\" since volume is currently attached to \"i-081c5901a8830e60d\""
I0527 00:40:23.922579       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.131.235).
I0527 00:40:23.922944       1 event.go:291] "Event occurred" object="volume-expand-7299-9009/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0527 00:40:23.924765       1 namespace_controller.go:185] Namespace has been deleted configmap-6209
I0527 00:40:23.982469       1 namespace_controller.go:185] Namespace has been deleted security-context-test-4000
I0527 00:40:24.108878       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.165.82).
I0527 00:40:24.312485       1 event.go:291] "Event occurred" object="volume-expand-7299-9009/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0527 00:40:24.312873       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.165.82).
E0527 00:40:24.481295       1 tokens_controller.go:262] error synchronizing serviceaccount prestop-8006/default: secrets "default-token-nj7mh" is forbidden: unable to create new content in namespace prestop-8006 because it is being terminated
I0527 00:40:24.499799       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.22).
I0527 00:40:24.504841       1 aws.go:2037] Releasing in-process attachment entry: bc -> volume vol-0c15f8a6dd022e65e
I0527 00:40:24.505222       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e") from node "ip-172-20-40-196.ap-southeast-1.compute.internal" 
I0527 00:40:24.506019       1 event.go:291] "Event occurred" object="provisioning-8319/pod-subpath-test-dynamicpv-sgnz" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2\" "
I0527 00:40:24.516055       1 event.go:291] "Event occurred" object="csi-mock-volumes-7499/pvc-bx9wd" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7499\" or manually created by system administrator"
I0527 00:40:24.548751       1 pv_controller.go:864] volume "pvc-b91c0ad1-8277-4cde-8fbb-c9f3057190cb" entered phase "Bound"
I0527 00:40:24.548891       1 pv_controller.go:967] volume "pvc-b91c0ad1-8277-4cde-8fbb-c9f3057190cb" bound to claim "csi-mock-volumes-7499/pvc-bx9wd"
I0527 00:40:24.555088       1 pv_controller.go:808] claim "csi-mock-volumes-7499/pvc-bx9wd" entered phase "Bound"
I0527 00:40:24.568267       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.117.206).
I0527 00:40:24.701117       1 event.go:291] "Event occurred" object="volume-expand-7299-9009/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0527 00:40:24.701400       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.22).
I0527 00:40:24.885858       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.2.64).
I0527 00:40:25.085639       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.2.64).
I0527 00:40:25.086934       1 event.go:291] "Event occurred" object="volume-expand-7299-9009/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0527 00:40:25.349791       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1593
I0527 00:40:25.504430       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:40:25.505902       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:40:25.507565       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5875^094b97e6-be84-11eb-b721-56d090557a50") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:40:25.578114       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.117.206).
I0527 00:40:25.610102       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c264216c-9abe-451a-ac1b-f31f143369d2" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0edaa34b7b8587ef9") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:40:25.612725       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-c264216c-9abe-451a-ac1b-f31f143369d2" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0edaa34b7b8587ef9") on node "ip-172-20-33-93.ap-southeast-1.compute.internal" 
I0527 00:40:25.660057       1 event.go:291] "Event occurred" object="volume-expand-7299/csi-hostpath287k4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-7299\" or manually created by system administrator"
I0527 00:40:25.890001       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.2.64).
E0527 00:40:26.029511       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:40:26.070948       1 pv_controller.go:864] volume "pvc-a326299d-3eee-420f-8ac4-0d0ea218f4a1" entered phase "Bound"
I0527 00:40:26.071002       1 pv_controller.go:967] volume "pvc-a326299d-3eee-420f-8ac4-0d0ea218f4a1" bound to claim "volume-expand-7299/csi-hostpath287k4"
I0527 00:40:26.076227       1 pv_controller.go:808] claim "volume-expand-7299/csi-hostpath287k4" entered phase "Bound"
I0527 00:40:26.137712       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.131.235).
I0527 00:40:26.191150       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.165.82).\nI0527 00:40:26.498510       1 namespace_controller.go:185] Namespace has been deleted ephemeral-9915-136\nI0527 00:40:26.700370       1 namespace_controller.go:185] Namespace has been deleted disruption-2-6403\nI0527 00:40:26.918302       1 namespace_controller.go:185] Namespace has been deleted ingress-9152\nI0527 00:40:26.927893       1 pvc_protection_controller.go:291] PVC provisioning-2334/pvc-q9ttq is unused\nI0527 00:40:26.935143       1 pv_controller.go:638] volume \"local-dnn2h\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:26.938356       1 pv_controller.go:864] volume \"local-dnn2h\" entered phase \"Released\"\nE0527 00:40:26.949618       1 tokens_controller.go:262] error synchronizing serviceaccount volume-1383/default: secrets \"default-token-8q77l\" is forbidden: unable to create new content in namespace volume-1383 because it is being terminated\nI0527 00:40:27.123494       1 pv_controller_base.go:504] deletion of claim \"provisioning-2334/pvc-q9ttq\" was already processed\nI0527 00:40:27.144326       1 namespace_controller.go:185] Namespace has been deleted disruption-1163\nI0527 00:40:27.145744       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.131.235).\nI0527 00:40:27.377485       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.2.64).\nI0527 00:40:27.784697       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.22).\nI0527 00:40:27.949615       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:27.949629       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:27.949615       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:27.949642       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:27.949646       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nE0527 00:40:28.348814       1 tokens_controller.go:262] error synchronizing serviceaccount services-4792/default: secrets \"default-token-pn67x\" is forbidden: unable to create new content in namespace services-4792 because it is being terminated\nI0527 00:40:28.382436       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-4792/externalname-service\" need=2 creating=1\nI0527 00:40:28.387686       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-snapshotter. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.2.64).\nI0527 00:40:28.721848       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-a326299d-3eee-420f-8ac4-0d0ea218f4a1\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-7299^20d01924-be84-11eb-8d3c-8ed3a13c2114\") from node \"ip-172-20-40-196.ap-southeast-1.compute.internal\" \nI0527 00:40:28.730567       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-a326299d-3eee-420f-8ac4-0d0ea218f4a1\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-7299^20d01924-be84-11eb-8d3c-8ed3a13c2114\") from node \"ip-172-20-40-196.ap-southeast-1.compute.internal\" \nI0527 00:40:28.730908       1 event.go:291] \"Event occurred\" object=\"volume-expand-7299/pod-29820a6b-787e-489d-94a1-6678fac5311f\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-a326299d-3eee-420f-8ac4-0d0ea218f4a1\\\" \"\nI0527 00:40:28.790230       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-7299-9009/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.22).\nI0527 00:40:30.149484       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-1940/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0527 00:40:30.149890       1 event.go:291] \"Event occurred\" object=\"webhook-1940/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0527 00:40:30.158031       1 event.go:291] \"Event occurred\" object=\"webhook-1940/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-qgrk2\"\nI0527 00:40:30.162506       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-1940/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:40:30.178379       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-1940/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:40:30.188929       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-1940/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:40:30.886151       1 namespace_controller.go:185] Namespace has been deleted proxy-5529\nI0527 00:40:30.994718       1 pv_controller.go:1326] 
isVolumeReleased[pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef]: volume is released\nI0527 00:40:30.994729       1 pv_controller.go:1326] isVolumeReleased[pvc-c264216c-9abe-451a-ac1b-f31f143369d2]: volume is released\nI0527 00:40:31.106333       1 aws.go:2291] Waiting for volume \"vol-0edaa34b7b8587ef9\" state: actual=detaching, desired=detached\nI0527 00:40:31.137604       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-0edaa34b7b8587ef9: error deleting EBS volume \"vol-0edaa34b7b8587ef9\" since volume is currently attached to \"i-081c5901a8830e60d\"\nE0527 00:40:31.137793       1 goroutinemap.go:150] Operation for \"delete-pvc-c264216c-9abe-451a-ac1b-f31f143369d2[02540198-364e-4d1d-860e-64d63023c40d]\" failed. No retries permitted until 2021-05-27 00:40:32.13777598 +0000 UTC m=+1162.394038569 (durationBeforeRetry 1s). Error: \"error deleting EBS volume \\\"vol-0edaa34b7b8587ef9\\\" since volume is currently attached to \\\"i-081c5901a8830e60d\\\"\"\nI0527 00:40:31.138132       1 event.go:291] \"Event occurred\" object=\"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0edaa34b7b8587ef9\\\" since volume is currently attached to \\\"i-081c5901a8830e60d\\\"\"\nI0527 00:40:31.240338       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-1a/vol-0226586ae109ac335\nI0527 00:40:31.240369       1 pv_controller.go:1421] volume \"pvc-939e3676-3ea8-48ae-a93d-2103aa8a73ef\" deleted\nI0527 00:40:31.251264       1 pv_controller_base.go:504] deletion of claim \"fsgroupchangepolicy-7842/awsmrlhm\" was already processed\nE0527 00:40:31.545530       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0527 00:40:31.834878       1 pv_controller.go:1437] 
error finding provisioning plugin for claim provisioning-9907/pvc-c7926: storageclass.storage.k8s.io \"provisioning-9907\" not found\nI0527 00:40:31.835082       1 event.go:291] \"Event occurred\" object=\"provisioning-9907/pvc-c7926\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9907\\\" not found\"\nI0527 00:40:31.988284       1 namespace_controller.go:185] Namespace has been deleted volume-1383\nI0527 00:40:32.040278       1 pv_controller.go:864] volume \"local-f6srw\" entered phase \"Available\"\nI0527 00:40:32.172507       1 namespace_controller.go:185] Namespace has been deleted apf-9629\nI0527 00:40:32.593105       1 pv_controller.go:864] volume \"local-pvs45mg\" entered phase \"Available\"\nI0527 00:40:32.781712       1 pv_controller.go:915] claim \"persistent-local-volumes-test-9482/pvc-9wrwl\" bound to volume \"local-pvs45mg\"\nI0527 00:40:32.792171       1 pv_controller.go:864] volume \"local-pvs45mg\" entered phase \"Bound\"\nI0527 00:40:32.792199       1 pv_controller.go:967] volume \"local-pvs45mg\" bound to claim \"persistent-local-volumes-test-9482/pvc-9wrwl\"\nI0527 00:40:32.797694       1 pv_controller.go:808] claim \"persistent-local-volumes-test-9482/pvc-9wrwl\" entered phase \"Bound\"\nI0527 00:40:33.084538       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-1940/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.152.152).\nE0527 00:40:33.090433       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:40:33.178943       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-27 00:39:59 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdbo\",\n  InstanceId: \"i-081c5901a8830e60d\",\n  State: \"detaching\",\n  VolumeId: \"vol-0edaa34b7b8587ef9\"\n}\nI0527 00:40:33.178991       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-c264216c-9abe-451a-ac1b-f31f143369d2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0edaa34b7b8587ef9\") on node \"ip-172-20-33-93.ap-southeast-1.compute.internal\" \nI0527 00:40:33.260718       1 pvc_protection_controller.go:291] PVC volume-expand-5875/csi-hostpathx2qj8 is unused\nI0527 00:40:33.266516       1 pv_controller.go:638] volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" is released and reclaim policy \"Delete\" will be executed\nI0527 00:40:33.269884       1 pv_controller.go:864] volume \"pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3\" entered phase \"Released\"\nI0527 00:40:33.275638       1 pv_controller.go:1326] isVolumeReleased[pvc-07f4aadc-e7f1-4b78-a8f3-74a6336177c3]: volume is released\nI0527 00:40:33.332836       1 pv_controller_base.go:504] deletion of claim \"volume-expand-5875/csi-hostpathx2qj8\" was already processed\nI0527 00:40:33.465504       1 namespace_controller.go:185] Namespace has been deleted volume-8967\nI0527 00:40:33.544245       1 namespace_controller.go:185] Namespace has been deleted services-4792\nI0527 00:40:33.565894       1 expand_controller.go:277] Ignoring the PVC \"volume-expand-7299/csi-hostpath287k4\" (uid: 
\"a326299d-3eee-420f-8ac4-0d0ea218f4a1\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0527 00:40:33.566130       1 event.go:291] \"Event occurred\" object=\"volume-expand-7299/csi-hostpath287k4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nE0527 00:40:34.370234       1 tokens_controller.go:262] error synchronizing serviceaccount projected-8979/default: secrets \"default-token-j92s2\" is forbidden: unable to create new content in namespace projected-8979 because it is being terminated\nI0527 00:40:34.404395       1 pvc_protection_controller.go:291] PVC volume-1942/pvc-9s9jn is unused\nI0527 00:40:34.410632       1 pv_controller.go:638] volume \"local-f6r48\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:34.416115       1 pv_controller.go:864] volume \"local-f6r48\" entered phase \"Released\"\nI0527 00:40:34.610298       1 pv_controller_base.go:504] deletion of claim \"volume-1942/pvc-9s9jn\" was already processed\nE0527 00:40:34.707986       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-5937/pvc-bwtdn: storageclass.storage.k8s.io \"provisioning-5937\" not found\nI0527 00:40:34.708210       1 event.go:291] \"Event occurred\" object=\"provisioning-5937/pvc-bwtdn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5937\\\" not found\"\nI0527 00:40:34.911568       1 pv_controller.go:864] volume \"local-v4974\" entered phase \"Available\"\nI0527 00:40:35.182570       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-attacher. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.108.112).\nI0527 00:40:35.378193       1 event.go:291] \"Event occurred\" object=\"volume-5070-4209/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0527 00:40:35.378563       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.108.112).\nI0527 00:40:35.642681       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:40:35.751959       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.188.62).\nI0527 00:40:35.947237       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.188.62).\nI0527 00:40:35.948143       1 event.go:291] \"Event occurred\" object=\"volume-5070-4209/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0527 00:40:36.070615       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-827/pod-release\" need=1 creating=1\nI0527 00:40:36.074227       1 event.go:291] \"Event occurred\" object=\"replication-controller-827/pod-release\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-release-j4dtf\"\nI0527 00:40:36.129861       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.13.234).\nI0527 00:40:36.324222       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.13.234).\nI0527 00:40:36.324786       1 event.go:291] \"Event occurred\" object=\"volume-5070-4209/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0527 00:40:36.509780       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-resizer. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.210.34).\nI0527 00:40:36.653255       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:40:36.668461       1 controller_ref_manager.go:229] patching pod replication-controller-827_pod-release-j4dtf to remove its controllerRef to v1/ReplicationController:pod-release\nI0527 00:40:36.671874       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-827/pod-release\" objectUID=c515d27f-5202-4b03-a70f-ea67337b718c kind=\"ReplicationController\" virtual=false\nI0527 00:40:36.673995       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-827/pod-release\" need=1 creating=1\nI0527 00:40:36.675241       1 garbagecollector.go:510] object [v1/ReplicationController, namespace: replication-controller-827, name: pod-release, uid: c515d27f-5202-4b03-a70f-ea67337b718c]'s doesn't have an owner, continue on next item\nI0527 00:40:36.681940       1 event.go:291] \"Event occurred\" object=\"replication-controller-827/pod-release\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-release-nl76t\"\nI0527 00:40:36.709561       1 event.go:291] \"Event occurred\" object=\"volume-5070-4209/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0527 00:40:36.710714       1 utils.go:413] couldn't find ipfamilies for headless service: 
volume-5070-4209/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.210.34).\nI0527 00:40:36.893156       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.239.176).\nI0527 00:40:36.953532       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1940/e2e-test-webhook-n7mjw\" objectUID=dd7f9030-b667-45c1-8cd8-765894955f1a kind=\"EndpointSlice\" virtual=false\nI0527 00:40:36.959640       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1940/e2e-test-webhook-n7mjw\" objectUID=dd7f9030-b667-45c1-8cd8-765894955f1a kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:40:37.086002       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.239.176).\nI0527 00:40:37.086343       1 event.go:291] \"Event occurred\" object=\"volume-5070-4209/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0527 00:40:37.134051       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.13.234).\nI0527 00:40:37.151708       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.108.112).\nI0527 00:40:37.169796       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1940/sample-webhook-deployment-6bd9446d55\" objectUID=ab0335b0-a205-449a-ad52-0d4429b08486 kind=\"ReplicaSet\" virtual=false\nI0527 00:40:37.170185       1 deployment_controller.go:581] Deployment webhook-1940/sample-webhook-deployment has been deleted\nI0527 00:40:37.171619       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1940/sample-webhook-deployment-6bd9446d55\" objectUID=ab0335b0-a205-449a-ad52-0d4429b08486 kind=\"ReplicaSet\" propagationPolicy=Background\nI0527 00:40:37.174153       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1940/sample-webhook-deployment-6bd9446d55-qgrk2\" objectUID=67cbd96f-8758-4641-8317-c442d0d3356d kind=\"Pod\" virtual=false\nI0527 00:40:37.175481       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1940/sample-webhook-deployment-6bd9446d55-qgrk2\" objectUID=67cbd96f-8758-4641-8317-c442d0d3356d kind=\"Pod\" propagationPolicy=Background\nI0527 00:40:37.514250       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.210.34).\nI0527 00:40:37.653864       1 event.go:291] \"Event occurred\" object=\"volume-5070/csi-hostpath29262\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-5070\\\" or manually created by system administrator\"\nI0527 00:40:37.774094       1 namespace_controller.go:185] Namespace has been deleted provisioning-9632\nI0527 00:40:37.905798       1 pv_controller.go:864] volume \"pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8\" entered phase \"Bound\"\nI0527 00:40:37.905964       1 pv_controller.go:967] volume \"pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8\" bound to claim \"volume-5070/csi-hostpath29262\"\nI0527 00:40:37.918206       1 pv_controller.go:808] claim \"volume-5070/csi-hostpath29262\" entered phase \"Bound\"\nI0527 00:40:37.933565       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:37.933565       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:37.933595       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:37.933611       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:37.933627       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0527 00:40:38.269505       1 utils.go:413] couldn't find ipfamilies for headless service: 
volume-5070-4209/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.13.234).\nI0527 00:40:38.393522       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.188.62).\nI0527 00:40:38.743003       1 namespace_controller.go:185] Namespace has been deleted volumemode-7628-9717\nI0527 00:40:38.797751       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nE0527 00:40:38.823281       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0527 00:40:38.866966       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-5875/default: secrets \"default-token-x5fm8\" is forbidden: unable to create new content in namespace volume-expand-5875 because it is being terminated\nI0527 00:40:39.046344       1 namespace_controller.go:185] Namespace has been deleted provisioning-2334\nI0527 00:40:39.278403       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.13.234).\nI0527 00:40:39.304504       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.385312       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.210.34).\nI0527 00:40:39.398770       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.239.176).\nI0527 00:40:39.400878       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.188.62).\nI0527 00:40:39.458500       1 namespace_controller.go:185] Namespace has been deleted projected-8979\nI0527 00:40:39.885950       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.893558       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:39.897262       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-9111/ss is recreating failed Pod ss-0\"\nI0527 00:40:39.904726       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.911548       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:39.913650       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.920491       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:39.928948       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.935342       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-9111/ss is recreating failed Pod ss-0\"\nI0527 00:40:39.944901       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.947849       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:39.954228       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:39.958393       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.983281       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-9111/ss is recreating failed Pod ss-0\"\nI0527 00:40:39.990597       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:39.996452       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:40.000075       1 deployment_controller.go:581] Deployment webhook-7871/sample-webhook-deployment has been deleted\nI0527 00:40:40.005678       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:40.008475       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:40.010845       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0527 00:40:40.010487       1 stateful_set.go:392] error syncing StatefulSet statefulset-9111/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0527 00:40:40.015906       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nE0527 00:40:40.023324       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-7134/default: secrets \"default-token-p6qnk\" is forbidden: unable to create new content in namespace subpath-7134 because it is being terminated\nI0527 00:40:40.311862       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:40.393142       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.210.34).\nI0527 00:40:40.405547       1 utils.go:413] couldn't find ipfamilies for headless service: volume-5070-4209/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.239.176).\nI0527 00:40:40.520721       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-9111/ss is recreating failed Pod ss-0\"\nI0527 00:40:40.523846       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:40.528583       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:40.530388       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:40.535494       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:40.536778       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:40.635678       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9482/pod-941a04ec-0bfc-4aad-b9c4-7f265e6cf655 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-9wrwl pvc- persistent-local-volumes-test-9482  6921dcac-bee9-40b9-ade0-6ec1cc789d30 32557 0 2021-05-27 00:40:32 +0000 UTC 2021-05-27 00:40:40 +0000 UTC 0xc001001718 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvs45mg,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9482,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:40:40.635796       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9482/pvc-9wrwl because it is still being used\nI0527 00:40:40.653630       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8\" 
(UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86\") from node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nI0527 00:40:40.663896       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86\") from node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nI0527 00:40:40.664076       1 event.go:291] \"Event occurred\" object=\"volume-5070/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8\\\" \"\nI0527 00:40:41.113045       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9482/pod-941a04ec-0bfc-4aad-b9c4-7f265e6cf655 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-9wrwl pvc- persistent-local-volumes-test-9482  6921dcac-bee9-40b9-ade0-6ec1cc789d30 32557 0 2021-05-27 00:40:32 +0000 UTC 2021-05-27 00:40:40 +0000 UTC 0xc001001718 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi 
BinarySI},},},VolumeName:local-pvs45mg,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9482,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:40:41.113156       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9482/pvc-9wrwl because it is still being used\nI0527 00:40:41.164286       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-7499/pvc-bx9wd is unused\nI0527 00:40:41.171227       1 pv_controller.go:638] volume \"pvc-b91c0ad1-8277-4cde-8fbb-c9f3057190cb\" is released and reclaim policy \"Delete\" will be executed\nI0527 00:40:41.174339       1 pv_controller.go:864] volume \"pvc-b91c0ad1-8277-4cde-8fbb-c9f3057190cb\" entered phase \"Released\"\nI0527 00:40:41.177592       1 pv_controller.go:1326] isVolumeReleased[pvc-b91c0ad1-8277-4cde-8fbb-c9f3057190cb]: volume is released\nI0527 00:40:41.188900       1 pv_controller_base.go:504] deletion of claim \"csi-mock-volumes-7499/pvc-bx9wd\" was already processed\nI0527 00:40:41.321243       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-9111/ss is recreating failed Pod ss-0\"\nI0527 00:40:41.325420       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:41.328975       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:41.333098       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:41.333981       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0527 00:40:41.666362       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-6675/pvc-s8flh: storageclass.storage.k8s.io \"volume-6675\" not found\nI0527 00:40:41.666853       1 event.go:291] \"Event occurred\" object=\"volume-6675/pvc-s8flh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6675\\\" not found\"\nI0527 00:40:41.684854       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-3959/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0527 00:40:41.685428       1 event.go:291] \"Event occurred\" object=\"webhook-3959/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0527 00:40:41.696141       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-3959/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps 
\\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:40:41.699916       1 event.go:291] \"Event occurred\" object=\"webhook-3959/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-lztwj\"\nE0527 00:40:41.735785       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-1940/default: secrets \"default-token-xqzlv\" is forbidden: unable to create new content in namespace webhook-1940 because it is being terminated\nI0527 00:40:41.865400       1 pv_controller.go:864] volume \"nfs-lwrkv\" entered phase \"Available\"\nE0527 00:40:41.880118       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-1940-markers/default: secrets \"default-token-8clkr\" is forbidden: unable to create new content in namespace webhook-1940-markers because it is being terminated\nE0527 00:40:42.196881       1 tokens_controller.go:262] error synchronizing serviceaccount port-forwarding-1278/default: secrets \"default-token-t2898\" is forbidden: unable to create new content in namespace port-forwarding-1278 because it is being terminated\nI0527 00:40:42.306416       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-827/pod-release-nl76t\" objectUID=336bcb82-4a82-47c5-842e-ffee30d5bcae kind=\"Pod\" virtual=false\nI0527 00:40:42.308416       1 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-827/pod-release-nl76t\" objectUID=336bcb82-4a82-47c5-842e-ffee30d5bcae kind=\"Pod\" propagationPolicy=Background\nI0527 00:40:42.450082       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9482/pod-941a04ec-0bfc-4aad-b9c4-7f265e6cf655 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-9wrwl pvc- persistent-local-volumes-test-9482  6921dcac-bee9-40b9-ade0-6ec1cc789d30 32557 0 
2021-05-27 00:40:32 +0000 UTC 2021-05-27 00:40:40 +0000 UTC 0xc001001718 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvs45mg,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9482,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:40:42.450176       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9482/pvc-9wrwl because it is still being used\nI0527 00:40:42.455224       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-9482/pvc-9wrwl is unused\nI0527 00:40:42.462695       1 pv_controller.go:638] volume \"local-pvs45mg\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:42.468895       1 pv_controller.go:864] volume \"local-pvs45mg\" entered phase \"Released\"\nI0527 00:40:42.476561       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-9482/pvc-9wrwl\" was already processed\nI0527 
00:40:42.993104       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-7842\nI0527 00:40:43.120961       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:40:43.288949       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.48.240).\nI0527 00:40:43.498312       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.48.240).\nI0527 00:40:43.499076       1 event.go:291] \"Event occurred\" object=\"provisioning-4265-3439/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0527 00:40:43.897217       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.50.133).\nI0527 00:40:43.992091       1 namespace_controller.go:185] Namespace has been deleted volume-expand-5875\nE0527 00:40:43.994402       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:40:44.108650       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5875-5066/csi-hostpath-attacher-7ncwv\" objectUID=ba313a56-c49b-4057-8a28-1da82596beac kind=\"EndpointSlice\" virtual=false\nI0527 00:40:44.114927       1 event.go:291] \"Event occurred\" object=\"provisioning-4265-3439/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0527 00:40:44.119638       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.50.133).\nI0527 00:40:44.121120       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5875-5066/csi-hostpath-attacher-7ncwv\" objectUID=ba313a56-c49b-4057-8a28-1da82596beac kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:40:44.285792       1 namespace_controller.go:185] Namespace has been deleted provisioning-1805\nI0527 00:40:44.293006       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.48.240).\nI0527 00:40:44.302363       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.209.193).\nI0527 00:40:44.323472       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5875-5066/csi-hostpath-attacher-dc4858494\" objectUID=8dec125d-0719-46b3-849f-e8d4a7131b4b kind=\"ControllerRevision\" virtual=false\nI0527 00:40:44.323546       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5875-5066/csi-hostpath-attacher\nI0527 00:40:44.323642       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5875-5066/csi-hostpath-attacher-0\" objectUID=777c0c52-d0ac-418e-bb8f-e4c7b8d234a8 kind=\"Pod\" virtual=false\nI0527 00:40:44.325287       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5875-5066/csi-hostpath-attacher-0\" objectUID=777c0c52-d0ac-418e-bb8f-e4c7b8d234a8 kind=\"Pod\" propagationPolicy=Background\nI0527 00:40:44.325716       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5875-5066/csi-hostpath-attacher-dc4858494\" objectUID=8dec125d-0719-46b3-849f-e8d4a7131b4b kind=\"ControllerRevision\" propagationPolicy=Background\nI0527 00:40:44.516750       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.209.193).\nI0527 00:40:44.518538       1 event.go:291] \"Event occurred\" object=\"provisioning-4265-3439/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0527 00:40:44.707853       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.149.15).\nI0527 00:40:44.721044       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-9111/ss is recreating failed Pod ss-0\"\nI0527 00:40:44.725200       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5875-5066/csi-hostpathplugin-w5k7s\" objectUID=6c8e28b0-ccc7-4003-9cfe-be906a443d72 kind=\"EndpointSlice\" virtual=false\nI0527 00:40:44.728806       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:44.731963       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5875-5066/csi-hostpathplugin-w5k7s\" objectUID=6c8e28b0-ccc7-4003-9cfe-be906a443d72 kind=\"EndpointSlice\" propagationPolicy=Background\nI0527 00:40:44.737268       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:44.737699       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:44.744245       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0527 00:40:44.747516       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:44.904391       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.50.133).\nI0527 00:40:44.918524       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-resizer. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.149.15).
I0527 00:40:44.920003       1 event.go:291] "Event occurred" object="provisioning-4265-3439/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0527 00:40:44.942205       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpathplugin-5c9b9bf698" objectUID=7da07b3c-9d85-4fba-8e7e-1212525837bc kind="ControllerRevision" virtual=false
I0527 00:40:44.942523       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5875-5066/csi-hostpathplugin
I0527 00:40:44.942861       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpathplugin-0" objectUID=65b27b22-5864-4441-923b-e6d95fb2947d kind="Pod" virtual=false
I0527 00:40:44.945907       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpathplugin-0" objectUID=65b27b22-5864-4441-923b-e6d95fb2947d kind="Pod" propagationPolicy=Background
I0527 00:40:44.946206       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpathplugin-5c9b9bf698" objectUID=7da07b3c-9d85-4fba-8e7e-1212525837bc kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:45.114844       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.12.39).
I0527 00:40:45.123901       1 namespace_controller.go:185] Namespace has been deleted subpath-7134
I0527 00:40:45.137230       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-provisioner-lz82h" objectUID=e3cf246c-c67e-49de-b124-48118fc8b7ed kind="EndpointSlice" virtual=false
I0527 00:40:45.148314       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-provisioner-lz82h" objectUID=e3cf246c-c67e-49de-b124-48118fc8b7ed kind="EndpointSlice" propagationPolicy=Background
E0527 00:40:45.255850       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:45.308969       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.209.193).
I0527 00:40:45.331868       1 event.go:291] "Event occurred" object="provisioning-4265-3439/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0527 00:40:45.332248       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.12.39).
I0527 00:40:45.350181       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-provisioner-5bd879d796" objectUID=54b765c7-b115-45f3-91c6-f3e11b773c2e kind="ControllerRevision" virtual=false
I0527 00:40:45.350529       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5875-5066/csi-hostpath-provisioner
I0527 00:40:45.350692       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-provisioner-0" objectUID=3a554f23-73ef-4c96-a897-41630bf15f95 kind="Pod" virtual=false
I0527 00:40:45.353165       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-provisioner-5bd879d796" objectUID=54b765c7-b115-45f3-91c6-f3e11b773c2e kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:45.353662       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-provisioner-0" objectUID=3a554f23-73ef-4c96-a897-41630bf15f95 kind="Pod" propagationPolicy=Background
E0527 00:40:45.408527       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:45.496119       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.164.253).
I0527 00:40:45.542059       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-resizer-9td6l" objectUID=a3f3b71a-d100-489f-a917-e4d7f1ea6323 kind="EndpointSlice" virtual=false
I0527 00:40:45.545638       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-resizer-9td6l" objectUID=a3f3b71a-d100-489f-a917-e4d7f1ea6323 kind="EndpointSlice" propagationPolicy=Background
E0527 00:40:45.557281       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:45.634100       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5875-5066/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.236.224).
E0527 00:40:45.691489       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:45.751585       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-resizer-79659dff65" objectUID=4de08050-fe79-4d76-9b04-d787221be240 kind="ControllerRevision" virtual=false
I0527 00:40:45.752063       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5875-5066/csi-hostpath-resizer
I0527 00:40:45.752099       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-resizer-0" objectUID=8feb9fc7-506c-40e8-8a12-9026901c1dfa kind="Pod" virtual=false
I0527 00:40:45.756947       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-resizer-79659dff65" objectUID=4de08050-fe79-4d76-9b04-d787221be240 kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:45.757582       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-resizer-0" objectUID=8feb9fc7-506c-40e8-8a12-9026901c1dfa kind="Pod" propagationPolicy=Background
E0527 00:40:45.859406       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:45.933028       1 event.go:291] "Event occurred" object="provisioning-4265/csi-hostpathqgrzl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4265\" or manually created by system administrator"
I0527 00:40:45.950493       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-snapshotter-d2mql" objectUID=be798c22-446e-4b3b-a7bf-550a9b2e5662 kind="EndpointSlice" virtual=false
I0527 00:40:45.959171       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-snapshotter-d2mql" objectUID=be798c22-446e-4b3b-a7bf-550a9b2e5662 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:45.993068       1 pv_controller.go:915] claim "provisioning-5937/pvc-bwtdn" bound to volume "local-v4974"
I0527 00:40:45.998898       1 pv_controller.go:1326] isVolumeReleased[pvc-c264216c-9abe-451a-ac1b-f31f143369d2]: volume is released
I0527 00:40:46.005736       1 pv_controller.go:864] volume "local-v4974" entered phase "Bound"
I0527 00:40:46.005799       1 pv_controller.go:967] volume "local-v4974" bound to claim "provisioning-5937/pvc-bwtdn"
I0527 00:40:46.015494       1 pv_controller.go:808] claim "provisioning-5937/pvc-bwtdn" entered phase "Bound"
I0527 00:40:46.015872       1 pv_controller.go:915] claim "provisioning-9907/pvc-c7926" bound to volume "local-f6srw"
I0527 00:40:46.023435       1 pv_controller.go:864] volume "local-f6srw" entered phase "Bound"
I0527 00:40:46.023662       1 pv_controller.go:967] volume "local-f6srw" bound to claim "provisioning-9907/pvc-c7926"
I0527 00:40:46.029510       1 pv_controller.go:808] claim "provisioning-9907/pvc-c7926" entered phase "Bound"
I0527 00:40:46.029903       1 pv_controller.go:915] claim "volume-6675/pvc-s8flh" bound to volume "nfs-lwrkv"
I0527 00:40:46.030115       1 event.go:291] "Event occurred" object="provisioning-4265/csi-hostpathqgrzl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4265\" or manually created by system administrator"
I0527 00:40:46.038265       1 pv_controller.go:864] volume "nfs-lwrkv" entered phase "Bound"
I0527 00:40:46.039862       1 pv_controller.go:967] volume "nfs-lwrkv" bound to claim "volume-6675/pvc-s8flh"
I0527 00:40:46.048309       1 pv_controller.go:808] claim "volume-6675/pvc-s8flh" entered phase "Bound"
E0527 00:40:46.097031       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:46.157388       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5875-5066/csi-hostpath-snapshotter
I0527 00:40:46.157668       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-snapshotter-5df8c6bdc9" objectUID=008d5b31-3028-4240-aeff-b23e87a75b00 kind="ControllerRevision" virtual=false
I0527 00:40:46.157973       1 garbagecollector.go:471] "Processing object" object="volume-expand-5875-5066/csi-hostpath-snapshotter-0" objectUID=a7fcb8d9-f0d6-4382-9dcd-6bf62cc39c5a kind="Pod" virtual=false
I0527 00:40:46.159712       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-snapshotter-5df8c6bdc9" objectUID=008d5b31-3028-4240-aeff-b23e87a75b00 kind="ControllerRevision" propagationPolicy=Background
I0527 00:40:46.160593       1 garbagecollector.go:580] "Deleting object" object="volume-expand-5875-5066/csi-hostpath-snapshotter-0" objectUID=a7fcb8d9-f0d6-4382-9dcd-6bf62cc39c5a kind="Pod" propagationPolicy=Background
I0527 00:40:46.182161       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-1a/vol-0edaa34b7b8587ef9
I0527 00:40:46.182345       1 pv_controller.go:1421] volume "pvc-c264216c-9abe-451a-ac1b-f31f143369d2" deleted
I0527 00:40:46.193569       1 pv_controller_base.go:504] deletion of claim "topology-290/pvc-jzrcn" was already processed
I0527 00:40:46.328958       1 pv_controller.go:864] volume "local-pvzrxvp" entered phase "Available"
E0527 00:40:46.356800       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:46.513583       1 pv_controller.go:915] claim "persistent-local-volumes-test-1260/pvc-47lzv" bound to volume "local-pvzrxvp"
I0527 00:40:46.522087       1 pv_controller.go:864] volume "local-pvzrxvp" entered phase "Bound"
I0527 00:40:46.522117       1 pv_controller.go:967] volume "local-pvzrxvp" bound to claim "persistent-local-volumes-test-1260/pvc-47lzv"
I0527 00:40:46.527249       1 pv_controller.go:808] claim "persistent-local-volumes-test-1260/pvc-47lzv" entered phase "Bound"
I0527 00:40:46.658753       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-3959/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.73.175).
E0527 00:40:46.803262       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:46.847590       1 namespace_controller.go:185] Namespace has been deleted webhook-1940
I0527 00:40:46.922257       1 namespace_controller.go:185] Namespace has been deleted volume-1942
I0527 00:40:47.016327       1 namespace_controller.go:185] Namespace has been deleted webhook-1940-markers
I0527 00:40:47.406527       1 namespace_controller.go:185] Namespace has been deleted replication-controller-827
E0527 00:40:47.425989       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
I0527 00:40:47.547062       1 namespace_controller.go:185] Namespace has been deleted secrets-4030
E0527 00:40:47.677866       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
E0527 00:40:47.683737       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:47.733073       1 utils.go:413] couldn't find ipfamilies for headless service: services-2519/nodeport-collision-1. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.216.117).
E0527 00:40:47.802214       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
I0527 00:40:47.898918       1 pv_controller.go:864] volume "pvc-f7f74b47-6472-495b-b21f-2efb6d211378" entered phase "Bound"
I0527 00:40:47.898949       1 pv_controller.go:967] volume "pvc-f7f74b47-6472-495b-b21f-2efb6d211378" bound to claim "provisioning-4265/csi-hostpathqgrzl"
I0527 00:40:47.916072       1 pv_controller.go:808] claim "provisioning-4265/csi-hostpathqgrzl" entered phase "Bound"
I0527 00:40:47.941128       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:47.941155       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:47.941293       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:47.941399       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:47.941493       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
E0527 00:40:48.033406       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: [unable to retrieve the complete list of server APIs: mygroup.example.com/v1beta1: the server could not find the requested resource, unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods]
I0527 00:40:48.135152       1 garbagecollector.go:471] "Processing object" object="services-2519/nodeport-collision-1-fkxfq" objectUID=f8c8ed08-32c0-433a-a2ff-63384fd03a07 kind="EndpointSlice" virtual=false
E0527 00:40:48.184718       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
I0527 00:40:48.387393       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.48.240).
I0527 00:40:48.541865       1 utils.go:413] couldn't find ipfamilies for headless service: services-2519/nodeport-collision-2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.236.176).
I0527 00:40:48.672560       1 garbagecollector.go:580] "Deleting object" object="services-2519/nodeport-collision-1-fkxfq" objectUID=f8c8ed08-32c0-433a-a2ff-63384fd03a07 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:48.747228       1 garbagecollector.go:471] "Processing object" object="services-2519/nodeport-collision-2-4zzsb" objectUID=55fdec28-9745-402a-97f8-0a61bc67e351 kind="EndpointSlice" virtual=false
I0527 00:40:48.753237       1 garbagecollector.go:580] "Deleting object" object="services-2519/nodeport-collision-2-4zzsb" objectUID=55fdec28-9745-402a-97f8-0a61bc67e351 kind="EndpointSlice" propagationPolicy=Background
E0527 00:40:48.787504       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
I0527 00:40:49.092027       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-f7f74b47-6472-495b-b21f-2efb6d211378" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-4265^2dd4eb6a-be84-11eb-a5c1-2a3ee2928945") from node "ip-172-20-40-196.ap-southeast-1.compute.internal" 
I0527 00:40:49.092079       1 namespace_controller.go:185] Namespace has been deleted volumemode-3922
I0527 00:40:49.106798       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-f7f74b47-6472-495b-b21f-2efb6d211378" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-4265^2dd4eb6a-be84-11eb-a5c1-2a3ee2928945") from node "ip-172-20-40-196.ap-southeast-1.compute.internal" 
I0527 00:40:49.106919       1 event.go:291] "Event occurred" object="provisioning-4265/pod-subpath-test-dynamicpv-qcm6" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-f7f74b47-6472-495b-b21f-2efb6d211378\" "
E0527 00:40:49.254998       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
E0527 00:40:49.261112       1 namespace_controller.go:162] deletion of namespace port-forwarding-8002 failed: unexpected items still remain in namespace: port-forwarding-8002 for gvr: /v1, Resource=pods
I0527 00:40:49.432986       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.48.240).
E0527 00:40:49.665931       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
E0527 00:40:50.694644       1 namespace_controller.go:162] deletion of namespace port-forwarding-1278 failed: unexpected items still remain in namespace: port-forwarding-1278 for gvr: /v1, Resource=pods
E0527 00:40:51.175595       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
E0527 00:40:51.281529       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
E0527 00:40:51.391678       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
E0527 00:40:51.498678       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
I0527 00:40:51.641411       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e") on node "ip-172-20-40-196.ap-southeast-1.compute.internal" 
I0527 00:40:51.647565       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e") on node "ip-172-20-40-196.ap-southeast-1.compute.internal" 
E0527 00:40:51.776463       1 tokens_controller.go:262] error synchronizing serviceaccount ingressclass-3616/default: secrets "default-token-mvpq2" is forbidden: unable to create new content in namespace ingressclass-3616 because it is being terminated
E0527 00:40:51.796985       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
E0527 00:40:51.811206       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-9482/default: secrets "default-token-95d55" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9482 because it is being terminated
I0527 00:40:51.893467       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7499
I0527 00:40:52.036162       1 namespace_controller.go:185] Namespace has been deleted downward-api-3750
E0527 00:40:52.105891       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
E0527 00:40:52.365391       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
I0527 00:40:52.677420       1 namespace_controller.go:185] Namespace has been deleted emptydir-6026
I0527 00:40:52.707666       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-8555
E0527 00:40:52.784446       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
I0527 00:40:53.192509       1 garbagecollector.go:471] "Processing object" object="webhook-3959/e2e-test-webhook-7fwf2" objectUID=1ea83fcb-3623-47cf-a192-7e8f0271b577 kind="EndpointSlice" virtual=false
I0527 00:40:53.196989       1 garbagecollector.go:580] "Deleting object" object="webhook-3959/e2e-test-webhook-7fwf2" objectUID=1ea83fcb-3623-47cf-a192-7e8f0271b577 kind="EndpointSlice" propagationPolicy=Background
I0527 00:40:53.321833       1 event.go:291] "Event occurred" object="statefulset-9111/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="RecreatingFailedPod" message="StatefulSet statefulset-9111/ss is recreating failed Pod ss-0"
I0527 00:40:53.325638       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:40:53.330116       1 event.go:291] "Event occurred" object="statefulset-9111/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0527 00:40:53.333843       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:40:53.337223       1 event.go:291] "Event occurred" object="statefulset-9111/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0527 00:40:53.341492       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0527 00:40:53.342781       1 stateful_set.go:392] error syncing StatefulSet statefulset-9111/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0527 00:40:53.344074       1 event.go:291] "Event occurred" object="statefulset-9111/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
I0527 00:40:53.397694       1 garbagecollector.go:471] "Processing object" object="webhook-3959/sample-webhook-deployment-6bd9446d55" objectUID=7425bcba-b66e-4d90-bd27-af144ff363c7 kind="ReplicaSet" virtual=false
I0527 00:40:53.397915       1 deployment_controller.go:581] Deployment webhook-3959/sample-webhook-deployment has been deleted
I0527 00:40:53.399225       1 garbagecollector.go:580] "Deleting object" object="webhook-3959/sample-webhook-deployment-6bd9446d55" objectUID=7425bcba-b66e-4d90-bd27-af144ff363c7 kind="ReplicaSet" propagationPolicy=Background
I0527 00:40:53.402395       1 garbagecollector.go:471] "Processing object" object="webhook-3959/sample-webhook-deployment-6bd9446d55-lztwj" objectUID=04473671-7653-4c86-9635-30c91905a2bf kind="Pod" virtual=false
I0527 00:40:53.403708       1 garbagecollector.go:580] "Deleting object" object="webhook-3959/sample-webhook-deployment-6bd9446d55-lztwj" objectUID=04473671-7653-4c86-9635-30c91905a2bf kind="Pod" propagationPolicy=Background
E0527 00:40:53.507003       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
I0527 00:40:53.834031       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7499-1425/csi-mockplugin-848657c588" objectUID=0e274fdc-481e-460d-adb3-e25f356cbf45 kind="ControllerRevision" virtual=false
I0527 00:40:53.834260       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-7499-1425/csi-mockplugin
I0527 00:40:53.834301       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7499-1425/csi-mockplugin-0" objectUID=c83e273e-45f9-43ab-aeda-c7a2b31d8e79 kind="Pod" virtual=false
I0527 00:40:53.836801       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7499-1425/csi-mockplugin-0" objectUID=c83e273e-45f9-43ab-aeda-c7a2b31d8e79 kind="Pod" propagationPolicy=Background
I0527 00:40:53.836801       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7499-1425/csi-mockplugin-848657c588" objectUID=0e274fdc-481e-460d-adb3-e25f356cbf45 kind="ControllerRevision" propagationPolicy=Background
E0527 00:40:54.273235       1 tokens_controller.go:262] error synchronizing serviceaccount services-2519/default: secrets "default-token-sqngj" is forbidden: unable to create new content in namespace services-2519 because it is being terminated
I0527 00:40:54.320883       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.209.193).
E0527 00:40:54.551838       1 tokens_controller.go:262] error synchronizing serviceaccount custom-resource-definition-6054/default: secrets "default-token-rlplv" is forbidden: unable to create new content in namespace custom-resource-definition-6054 because it is being terminated
I0527 00:40:54.728390       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).
E0527 00:40:54.874384       1 namespace_controller.go:162] deletion of namespace configmap-3475 failed: unexpected items still remain in namespace: configmap-3475 for gvr: /v1, Resource=pods
I0527 00:40:54.921319       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.149.15).
I0527 00:40:55.101304       1 pvc_protection_controller.go:291] PVC volume-6675/pvc-s8flh is unused
I0527 00:40:55.107457       1 pv_controller.go:638] volume "nfs-lwrkv" is released and reclaim policy "Retain" will be executed
I0527 00:40:55.110488       1 pv_controller.go:864] volume "nfs-lwrkv" entered phase "Released"
I0527 00:40:55.296178       1 pv_controller_base.go:504] deletion of claim "volume-6675/pvc-s8flh" was already processed
I0527 00:40:55.329982       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.50.133).
I0527 00:40:55.347266       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.209.193).
E0527 00:40:55.672206       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-3171/pvc-vhnxz: storageclass.storage.k8s.io "provisioning-3171" not found
I0527 00:40:55.672462       1 event.go:291] "Event occurred" object="provisioning-3171/pvc-vhnxz" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-3171\" not found"
E0527 00:40:55.710441       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-1692/default: secrets "default-token-w6s79" is forbidden: unable to create new content in namespace nettest-1692 because it is being terminated
I0527 00:40:55.871900       1 pv_controller.go:864] volume "local-rrxpw" entered phase "Available"
I0527 00:40:56.821153       1 namespace_controller.go:185] Namespace has been deleted ingressclass-3616
E0527 00:40:56.904814       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:40:56.979053       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9482
I0527 00:40:57.077477       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-8002
I0527 00:40:57.107658       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.138.122).
I0527 00:40:57.134981       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-1278
I0527 00:40:57.229305       1 aws.go:2291] Waiting for volume "vol-0c15f8a6dd022e65e" state: actual=detaching, desired=detached
I0527 00:40:57.305919       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.138.122).
I0527 00:40:57.309589       1 event.go:291] "Event occurred" object="provisioning-678-2413/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0527 00:40:57.322539       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-4265-3439/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.12.39).
I0527 00:40:57.529202       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-1260/pod-be26bb53-33a2-4cd6-b3e9-9ad8b9c5937c uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-47lzv pvc- persistent-local-volumes-test-1260  be806aa4-28ec-4a8c-ad0b-cd854a63c286 33593 0 2021-05-27 00:40:46 +0000 UTC 2021-05-27 00:40:57 +0000 UTC 0xc0024f13f8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvzrxvp,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-1260,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:40:57.529467       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-1260/pvc-47lzv because it is still being used
I0527 00:40:57.687932       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.95.64).
E0527 00:40:57.861104       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-3959/default: secrets "default-token-nplph" is forbidden: unable to create new content in namespace webhook-3959 because it is being terminated
I0527 00:40:57.894827       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:57.894857       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:57.895142       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:57.895183       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:57.895198       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:40:57.915069       1 event.go:291] "Event occurred" object="provisioning-678-2413/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0527 00:40:57.916521       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.95.64).
E0527 00:40:58.075849       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-3959-markers/default: secrets "default-token-hpm82" is forbidden: unable to create new content in namespace webhook-3959-markers because it is being terminated
I0527 00:40:58.082241       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.204.42).
I0527 00:40:58.113619       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.138.122).
I0527 00:40:58.282034       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.204.42).
I0527 00:40:58.283243       1 event.go:291] "Event occurred" object="provisioning-678-2413/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0527 00:40:58.470556       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-resizer. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.137.150).\nI0527 00:40:58.672265       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.137.150).\nI0527 00:40:58.673764       1 event.go:291] \"Event occurred\" object=\"provisioning-678-2413/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0527 00:40:58.696090       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.95.64).\nE0527 00:40:58.812539       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-5993/pvc-tws92: storageclass.storage.k8s.io \"provisioning-5993\" not found\nI0527 00:40:58.813250       1 event.go:291] \"Event occurred\" object=\"provisioning-5993/pvc-tws92\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5993\\\" not found\"\nI0527 00:40:58.861788       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.60.173).\nI0527 00:40:58.920633       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:40:58.989052       1 pvc_protection_controller.go:291] PVC provisioning-9907/pvc-c7926 is unused\nI0527 00:40:58.995460       1 pv_controller.go:638] volume \"local-f6srw\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:58.998686       1 pv_controller.go:864] volume \"local-f6srw\" entered phase \"Released\"\nI0527 00:40:59.004734       1 pv_controller.go:864] volume \"local-d5lzn\" entered phase \"Available\"\nI0527 00:40:59.056265       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-1260/pod-5a8c9251-54f5-4c91-9f04-9598f961284a uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-47lzv pvc- persistent-local-volumes-test-1260  be806aa4-28ec-4a8c-ad0b-cd854a63c286 33593 0 2021-05-27 00:40:46 +0000 UTC 2021-05-27 00:40:57 +0000 UTC 0xc0024f13f8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvzrxvp,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-1260,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:40:59.056329       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-1260/pvc-47lzv because it is still being used\nI0527 00:40:59.063851       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.60.173).\nI0527 00:40:59.064646       1 event.go:291] \"Event occurred\" object=\"provisioning-678-2413/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0527 00:40:59.087294       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.204.42).\nI0527 00:40:59.195086       1 pv_controller_base.go:504] deletion of claim \"provisioning-9907/pvc-c7926\" was already processed\nI0527 00:40:59.268769       1 pv_controller.go:864] volume \"local-pvm92wv\" entered phase \"Available\"\nE0527 00:40:59.289144       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:40:59.294749       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-27 00:40:22 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdbc\",\n  InstanceId: \"i-063fbd80874e99720\",\n  State: \"detaching\",\n  VolumeId: \"vol-0c15f8a6dd022e65e\"\n}\nI0527 00:40:59.294977       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e\") on node \"ip-172-20-40-196.ap-southeast-1.compute.internal\" \nI0527 00:40:59.365071       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e\") from node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:40:59.387333       1 namespace_controller.go:185] Namespace has been deleted services-2519\nI0527 00:40:59.454633       1 aws.go:2014] Assigned mount device bu -> volume vol-0c15f8a6dd022e65e\nI0527 00:40:59.458019       1 pv_controller.go:915] claim \"persistent-local-volumes-test-5020/pvc-xwmrp\" bound to volume \"local-pvm92wv\"\nI0527 00:40:59.464878       1 pv_controller.go:864] volume \"local-pvm92wv\" entered phase \"Bound\"\nI0527 00:40:59.464903       1 pv_controller.go:967] 
volume \"local-pvm92wv\" bound to claim \"persistent-local-volumes-test-5020/pvc-xwmrp\"\nI0527 00:40:59.469460       1 pv_controller.go:808] claim \"persistent-local-volumes-test-5020/pvc-xwmrp\" entered phase \"Bound\"\nI0527 00:40:59.520374       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:40:59.551943       1 pvc_protection_controller.go:291] PVC provisioning-5937/pvc-bwtdn is unused\nI0527 00:40:59.557304       1 pv_controller.go:638] volume \"local-v4974\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:40:59.559925       1 pv_controller.go:864] volume \"local-v4974\" entered phase \"Released\"\nI0527 00:40:59.614781       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-6054\nI0527 00:40:59.645399       1 event.go:291] \"Event occurred\" object=\"provisioning-678/pvc-p7rn8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-678\\\" or manually created by system administrator\"\nI0527 00:40:59.645819       1 event.go:291] \"Event occurred\" object=\"provisioning-678/pvc-p7rn8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-678\\\" or manually created by system administrator\"\nI0527 00:40:59.756365       1 pv_controller_base.go:504] deletion of claim \"provisioning-5937/pvc-bwtdn\" was already processed\nI0527 00:40:59.822931       1 aws.go:2427] AttachVolume volume=\"vol-0c15f8a6dd022e65e\" instance=\"i-069a67f4c9afb4c56\" request 
returned {\n  AttachTime: 2021-05-27 00:40:59.813 +0000 UTC,\n  Device: \"/dev/xvdbu\",\n  InstanceId: \"i-069a67f4c9afb4c56\",\n  State: \"attaching\",\n  VolumeId: \"vol-0c15f8a6dd022e65e\"\n}\nI0527 00:40:59.868830       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.60.173).\nI0527 00:40:59.906308       1 pv_controller.go:864] volume \"pvc-1b9c2d56-b07d-451b-9c7e-8ee2fee05eee\" entered phase \"Bound\"\nI0527 00:40:59.906336       1 pv_controller.go:967] volume \"pvc-1b9c2d56-b07d-451b-9c7e-8ee2fee05eee\" bound to claim \"provisioning-678/pvc-p7rn8\"\nI0527 00:40:59.912904       1 pv_controller.go:808] claim \"provisioning-678/pvc-p7rn8\" entered phase \"Bound\"\nI0527 00:41:00.088592       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-1260/pod-be26bb53-33a2-4cd6-b3e9-9ad8b9c5937c uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-47lzv pvc- persistent-local-volumes-test-1260  be806aa4-28ec-4a8c-ad0b-cd854a63c286 33593 0 2021-05-27 00:40:46 +0000 UTC 2021-05-27 00:40:57 +0000 UTC 0xc0024f13f8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvzrxvp,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-1260,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:41:00.088690       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-1260/pvc-47lzv because it is still being used\nI0527 00:41:00.092547       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-1260/pod-be26bb53-33a2-4cd6-b3e9-9ad8b9c5937c uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-47lzv pvc- persistent-local-volumes-test-1260  be806aa4-28ec-4a8c-ad0b-cd854a63c286 33593 0 2021-05-27 00:40:46 +0000 UTC 2021-05-27 00:40:57 +0000 UTC 0xc0024f13f8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvzrxvp,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-1260,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:41:00.092609       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-1260/pvc-47lzv because it is still being used\nI0527 00:41:00.435200       1 namespace_controller.go:185] Namespace has been deleted topology-290\nI0527 00:41:00.526516       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:41:00.766437       1 namespace_controller.go:185] Namespace has been deleted nettest-1692\nI0527 00:41:00.993069       1 pv_controller.go:915] claim \"provisioning-5993/pvc-tws92\" bound to volume \"local-d5lzn\"\nI0527 00:41:01.000010       1 pv_controller.go:864] volume \"local-d5lzn\" entered phase \"Bound\"\nI0527 00:41:01.000042       1 pv_controller.go:967] volume \"local-d5lzn\" bound to claim \"provisioning-5993/pvc-tws92\"\nI0527 00:41:01.005464       1 pv_controller.go:808] claim \"provisioning-5993/pvc-tws92\" entered phase \"Bound\"\nI0527 00:41:01.005823       1 pv_controller.go:915] claim \"provisioning-3171/pvc-vhnxz\" bound to volume \"local-rrxpw\"\nI0527 00:41:01.013763       1 stateful_set_control.go:489] StatefulSet statefulset-9111/ss terminating Pod ss-0 for scale down\nI0527 00:41:01.015254       1 pv_controller.go:864] volume \"local-rrxpw\" entered phase \"Bound\"\nI0527 00:41:01.015774       1 pv_controller.go:967] volume \"local-rrxpw\" bound to claim \"provisioning-3171/pvc-vhnxz\"\nI0527 00:41:01.029036       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:41:01.035733       1 event.go:291] \"Event occurred\" object=\"statefulset-9111/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0527 00:41:01.039428       1 pv_controller.go:808] claim \"provisioning-3171/pvc-vhnxz\" entered phase \"Bound\"\nI0527 00:41:01.292429       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-1260/pod-be26bb53-33a2-4cd6-b3e9-9ad8b9c5937c uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-47lzv pvc- persistent-local-volumes-test-1260  be806aa4-28ec-4a8c-ad0b-cd854a63c286 33593 0 2021-05-27 00:40:46 +0000 UTC 2021-05-27 00:40:57 +0000 UTC 0xc0024f13f8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:46 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvzrxvp,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-1260,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi 
BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0527 00:41:01.292792       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-1260/pvc-47lzv because it is still being used\nI0527 00:41:01.297039       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-1260/pvc-47lzv is unused\nI0527 00:41:01.302998       1 pv_controller.go:638] volume \"local-pvzrxvp\" is released and reclaim policy \"Retain\" will be executed\nI0527 00:41:01.305078       1 pv_controller.go:864] volume \"local-pvzrxvp\" entered phase \"Released\"\nI0527 00:41:01.309774       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-1260/pvc-47lzv\" was already processed\nI0527 00:41:01.497434       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:41:01.504648       1 utils.go:413] couldn't find ipfamilies for headless service: conntrack-4818/svc-udp. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.236.6).\nI0527 00:41:01.512823       1 endpoints_controller.go:363] \"Error syncing endpoints, retrying\" service=\"conntrack-4818/svc-udp\" err=\"Operation cannot be fulfilled on endpoints \\\"svc-udp\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:41:01.513030       1 event.go:291] \"Event occurred\" object=\"conntrack-4818/svc-udp\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint conntrack-4818/svc-udp: Operation cannot be fulfilled on endpoints \\\"svc-udp\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0527 00:41:01.785818       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:41:01.952282       1 aws.go:2037] Releasing in-process attachment entry: bu -> volume vol-0c15f8a6dd022e65e\nI0527 00:41:01.952336       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-1a/vol-0c15f8a6dd022e65e\") from node \"ip-172-20-40-209.ap-southeast-1.compute.internal\" \nI0527 00:41:01.952443       1 event.go:291] \"Event occurred\" object=\"provisioning-8319/pod-subpath-test-dynamicpv-sgnz\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2\\\" \"\nI0527 00:41:02.040297       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0527 00:41:02.187867       1 namespace_controller.go:185] Namespace has been deleted volume-expand-5875-5066\nI0527 00:41:02.364949       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.138.122).\nI0527 00:41:02.388124       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-1b9c2d56-b07d-451b-9c7e-8ee2fee05eee\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-678^34f82ec9-be84-11eb-875f-2ac6ac4e9bcd\") from node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nI0527 00:41:02.408787       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-1b9c2d56-b07d-451b-9c7e-8ee2fee05eee\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-678^34f82ec9-be84-11eb-875f-2ac6ac4e9bcd\") from node \"ip-172-20-41-144.ap-southeast-1.compute.internal\" \nI0527 00:41:02.409039       1 event.go:291] \"Event occurred\" object=\"provisioning-678/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-1b9c2d56-b07d-451b-9c7e-8ee2fee05eee\\\" \"\nI0527 00:41:02.457763       1 event.go:291] \"Event occurred\" object=\"cronjob-6308/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-1622076060\"\nI0527 00:41:02.467143       1 cronjob_controller.go:188] Unable to update status for cronjob-6308/concurrent (rv = 30804): Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nI0527 00:41:02.467613       1 event.go:291] \"Event occurred\" object=\"cronjob-6308/concurrent-1622076060\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-1622076060-5gl9w\"\nI0527 00:41:02.516641       1 namespace_controller.go:185] Namespace has been deleted configmap-3475\nI0527 00:41:03.170138       1 namespace_controller.go:185] Namespace has been deleted 
webhook-3959-markers\nI0527 00:41:03.204276       1 namespace_controller.go:185] Namespace has been deleted prestop-5422\nI0527 00:41:03.272711       1 namespace_controller.go:185] Namespace has been deleted nettest-6126\nI0527 00:41:03.567479       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.95.64).\nE0527 00:41:03.825408       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0527 00:41:03.964297       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.204.42).\nI0527 00:41:04.373766       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.60.173).\nI0527 00:41:04.383881       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7499-1425\nI0527 00:41:04.582300       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.95.64).
I0527 00:41:04.665515       1 garbagecollector.go:471] "Processing object" object="cronjob-6308/concurrent-1622076000" objectUID=013faa11-32ec-4e19-a1bc-1d450e536ef8 kind="Job" virtual=false
I0527 00:41:04.665711       1 garbagecollector.go:471] "Processing object" object="cronjob-6308/concurrent-1622076060" objectUID=45d8d14a-aab0-4aef-b74a-93ea27656ec4 kind="Job" virtual=false
I0527 00:41:04.668582       1 garbagecollector.go:580] "Deleting object" object="cronjob-6308/concurrent-1622076060" objectUID=45d8d14a-aab0-4aef-b74a-93ea27656ec4 kind="Job" propagationPolicy=Background
I0527 00:41:04.669041       1 garbagecollector.go:580] "Deleting object" object="cronjob-6308/concurrent-1622076000" objectUID=013faa11-32ec-4e19-a1bc-1d450e536ef8 kind="Job" propagationPolicy=Background
I0527 00:41:04.673529       1 garbagecollector.go:471] "Processing object" object="cronjob-6308/concurrent-1622076000-v6rgt" objectUID=99870744-d7fa-4cab-8310-c23ba9ae99c0 kind="Pod" virtual=false
I0527 00:41:04.675685       1 garbagecollector.go:471] "Processing object" object="cronjob-6308/concurrent-1622076060-5gl9w" objectUID=803d3304-af1a-4613-9b40-030999af6d88 kind="Pod" virtual=false
I0527 00:41:04.675995       1 garbagecollector.go:580] "Deleting object" object="cronjob-6308/concurrent-1622076000-v6rgt" objectUID=99870744-d7fa-4cab-8310-c23ba9ae99c0 kind="Pod" propagationPolicy=Background
I0527 00:41:04.678989       1 garbagecollector.go:580] "Deleting object" object="cronjob-6308/concurrent-1622076060-5gl9w" objectUID=803d3304-af1a-4613-9b40-030999af6d88 kind="Pod" propagationPolicy=Background
I0527 00:41:04.897016       1 pv_controller.go:864] volume "local-pvw6qbw" entered phase "Available"
I0527 00:41:05.085105       1 pv_controller.go:915] claim "persistent-local-volumes-test-708/pvc-tw85v" bound to volume "local-pvw6qbw"
I0527 00:41:05.092782       1 pv_controller.go:864] volume "local-pvw6qbw" entered phase "Bound"
I0527 00:41:05.092926       1 pv_controller.go:967] volume "local-pvw6qbw" bound to claim "persistent-local-volumes-test-708/pvc-tw85v"
I0527 00:41:05.098296       1 pv_controller.go:808] claim "persistent-local-volumes-test-708/pvc-tw85v" entered phase "Bound"
I0527 00:41:05.564666       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-678-2413/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.137.150).
I0527 00:41:06.009032       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:41:06.013137       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:41:06.017715       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86") on node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
E0527 00:41:06.755158       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5937/default: secrets "default-token-2qn2b" is forbidden: unable to create new content in namespace provisioning-5937 because it is being terminated
I0527 00:41:06.923085       1 namespace_controller.go:185] Namespace has been deleted prestop-8006
E0527 00:41:07.577332       1 tokens_controller.go:262] error synchronizing serviceaccount gc-1256/default: secrets "default-token-jskzq" is forbidden: unable to create new content in namespace gc-1256 because it is being terminated
I0527 00:41:07.878544       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:07.878572       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:07.878645       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:07.878678       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:07.878689       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:08.085672       1 pvc_protection_controller.go:291] PVC provisioning-5993/pvc-tws92 is unused
I0527 00:41:08.092715       1 pv_controller.go:638] volume "local-d5lzn" is released and reclaim policy "Retain" will be executed
I0527 00:41:08.099835       1 pv_controller.go:864] volume "local-d5lzn" entered phase "Released"
I0527 00:41:08.123327       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:08.146034       1 namespace_controller.go:185] Namespace has been deleted webhook-3959
I0527 00:41:08.277001       1 pv_controller_base.go:504] deletion of claim "provisioning-5993/pvc-tws92" was already processed
I0527 00:41:08.320093       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:08.320396       1 event.go:291] "Event occurred" object="statefulset-6865/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0527 00:41:08.412399       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86") from node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:41:08.419722       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5070^27e07695-be84-11eb-871e-d6202ea27c86") from node "ip-172-20-41-144.ap-southeast-1.compute.internal" 
I0527 00:41:08.419973       1 event.go:291] "Event occurred" object="volume-5070/hostpath-client" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-81f82502-55b7-4fec-b5d6-54957b0f17e8\" "
E0527 00:41:08.992603       1 tokens_controller.go:262] error synchronizing serviceaccount projected-4275/default: secrets "default-token-4nbnt" is forbidden: unable to create new content in namespace projected-4275 because it is being terminated
I0527 00:41:09.126257       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:09.414640       1 pvc_protection_controller.go:291] PVC provisioning-3171/pvc-vhnxz is unused
I0527 00:41:09.423362       1 pv_controller.go:638] volume "local-rrxpw" is released and reclaim policy "Retain" will be executed
I0527 00:41:09.426461       1 pv_controller.go:864] volume "local-rrxpw" entered phase "Released"
I0527 00:41:09.614495       1 pv_controller_base.go:504] deletion of claim "provisioning-3171/pvc-vhnxz" was already processed
I0527 00:41:09.665094       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1260
E0527 00:41:09.910805       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:41:10.989264       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:11.271711       1 event.go:291] "Event occurred" object="provisioning-716/nfsjd2rr" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"example.com/nfs-provisioning-716\" or manually created by system administrator"
I0527 00:41:11.388797       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:11.397153       1 event.go:291] "Event occurred" object="statefulset-6865/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0527 00:41:11.402861       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:11.760482       1 namespace_controller.go:185] Namespace has been deleted provisioning-5937
I0527 00:41:11.995299       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:12.632845       1 namespace_controller.go:185] Namespace has been deleted gc-1256
E0527 00:41:12.865027       1 tokens_controller.go:262] error synchronizing serviceaccount container-runtime-5556/default: secrets "default-token-s87b2" is forbidden: unable to create new content in namespace container-runtime-5556 because it is being terminated
I0527 00:41:13.253271       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5020/pod-aa6e0645-363d-49aa-97b5-d9fd152a6fda uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xwmrp pvc- persistent-local-volumes-test-5020  7cd37cd9-6a8b-48fb-8ed1-f8f948851fba 34592 0 2021-05-27 00:40:59 +0000 UTC 2021-05-27 00:41:13 +0000 UTC 0xc00284a938 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm92wv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5020,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:13.253317       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5020/pvc-xwmrp because it is still being used
E0527 00:41:13.286963       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:41:13.309562       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-4779/test-quota
I0527 00:41:13.688401       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:13.842726       1 namespace_controller.go:185] Namespace has been deleted provisioning-9907
I0527 00:41:14.023151       1 namespace_controller.go:185] Namespace has been deleted projected-4275
I0527 00:41:14.667337       1 pv_controller.go:864] volume "pvc-fc38bbf8-27e2-4d0f-9825-a29aa0d449c4" entered phase "Bound"
I0527 00:41:14.667535       1 pv_controller.go:967] volume "pvc-fc38bbf8-27e2-4d0f-9825-a29aa0d449c4" bound to claim "provisioning-716/nfsjd2rr"
I0527 00:41:14.673304       1 pv_controller.go:808] claim "provisioning-716/nfsjd2rr" entered phase "Bound"
I0527 00:41:14.692595       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:14.815966       1 namespace_controller.go:185] Namespace has been deleted security-context-test-6695
I0527 00:41:15.338004       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:15.352620       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9111/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:16.088121       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:16.101638       1 event.go:291] "Event occurred" object="statefulset-6865/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
I0527 00:41:16.111305       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0527 00:41:16.439521       1 pvc_protection_controller.go:291] PVC provisioning-8319/awsj7qlw is unused
I0527 00:41:16.450608       1 pv_controller.go:638] volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" is released and reclaim policy "Delete" will be executed
I0527 00:41:16.458342       1 pv_controller.go:864] volume "pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" entered phase "Released"
I0527 00:41:16.460178       1 pv_controller.go:1326] isVolumeReleased[pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2]: volume is released
I0527 00:41:16.614608       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-1a/vol-0c15f8a6dd022e65e: error deleting EBS volume "vol-0c15f8a6dd022e65e" since volume is currently attached to "i-069a67f4c9afb4c56"
E0527 00:41:16.614904       1 goroutinemap.go:150] Operation for "delete-pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2[ff700e1d-88b8-4ae7-a8d0-8557d639421d]" failed. No retries permitted until 2021-05-27 00:41:17.114876381 +0000 UTC m=+1207.371138973 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0c15f8a6dd022e65e\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:41:16.615408       1 event.go:291] "Event occurred" object="pvc-6ba4850f-9e3a-4d68-810b-8633e67195c2" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0c15f8a6dd022e65e\" since volume is currently attached to \"i-069a67f4c9afb4c56\""
I0527 00:41:16.633129       1 namespace_controller.go:185] Namespace has been deleted security-context-test-1641
I0527 00:41:16.670005       1 namespace_controller.go:185] Namespace has been deleted chunking-7526
I0527 00:41:16.712634       1 utils.go:413] couldn't find ipfamilies for headless service: kubectl-2235/agnhost-replica. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.4.140).
I0527 00:41:17.007728       1 namespace_controller.go:185] Namespace has been deleted health-1311
I0527 00:41:17.108794       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0527 00:41:17.179476       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:41:17.300889       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5020/pod-aa6e0645-363d-49aa-97b5-d9fd152a6fda uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xwmrp pvc- persistent-local-volumes-test-5020  7cd37cd9-6a8b-48fb-8ed1-f8f948851fba 34592 0 2021-05-27 00:40:59 +0000 UTC 2021-05-27 00:41:13 +0000 UTC 0xc00284a938 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm92wv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5020,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:17.300945       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5020/pvc-xwmrp because it is still being used
I0527 00:41:17.723818       1 utils.go:413] couldn't find ipfamilies for headless service: kubectl-2235/agnhost-replica. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.4.140).
I0527 00:41:17.885569       1 route_controller.go:294] set node ip-172-20-40-196.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:17.886420       1 route_controller.go:294] set node ip-172-20-42-187.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:17.886436       1 route_controller.go:294] set node ip-172-20-40-209.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:17.886452       1 route_controller.go:294] set node ip-172-20-33-93.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:17.886468       1 route_controller.go:294] set node ip-172-20-41-144.ap-southeast-1.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0527 00:41:17.952746       1 namespace_controller.go:185] Namespace has been deleted container-runtime-5556
E0527 00:41:17.985360       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3171/default: secrets "default-token-c67sj" is forbidden: unable to create new content in namespace provisioning-3171 because it is being terminated
I0527 00:41:18.287741       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5020/pod-aa6e0645-363d-49aa-97b5-d9fd152a6fda uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xwmrp pvc- persistent-local-volumes-test-5020  7cd37cd9-6a8b-48fb-8ed1-f8f948851fba 34592 0 2021-05-27 00:40:59 +0000 UTC 2021-05-27 00:41:13 +0000 UTC 0xc00284a938 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm92wv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5020,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:18.287886       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5020/pvc-xwmrp because it is still being used
I0527 00:41:18.400854       1 namespace_controller.go:185] Namespace has been deleted resourcequota-4779
I0527 00:41:18.486950       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5020/pod-aa6e0645-363d-49aa-97b5-d9fd152a6fda uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xwmrp pvc- persistent-local-volumes-test-5020  7cd37cd9-6a8b-48fb-8ed1-f8f948851fba 34592 0 2021-05-27 00:40:59 +0000 UTC 2021-05-27 00:41:13 +0000 UTC 0xc00284a938 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm92wv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5020,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:18.487052       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5020/pvc-xwmrp because it is still being used
I0527 00:41:18.489503       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5020/pod-7ba4c71e-a311-4f5d-9d05-332bbc0c4fa9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xwmrp pvc- persistent-local-volumes-test-5020  7cd37cd9-6a8b-48fb-8ed1-f8f948851fba 34592 0 2021-05-27 00:40:59 +0000 UTC 2021-05-27 00:41:13 +0000 UTC 0xc00284a938 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm92wv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5020,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:18.489762       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5020/pvc-xwmrp because it is still being used
E0527 00:41:19.208857       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0527 00:41:19.686595       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5020/pod-7ba4c71e-a311-4f5d-9d05-332bbc0c4fa9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xwmrp pvc- persistent-local-volumes-test-5020  7cd37cd9-6a8b-48fb-8ed1-f8f948851fba 34592 0 2021-05-27 00:40:59 +0000 UTC 2021-05-27 00:41:13 +0000 UTC 0xc00284a938 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvm92wv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5020,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:19.686662       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5020/pvc-xwmrp because it is still being used
I0527 00:41:19.692698       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-5020/pvc-xwmrp is unused
I0527 00:41:19.698216       1 pv_controller.go:638] volume "local-pvm92wv" is released and reclaim policy "Retain" will be executed
I0527 00:41:19.701341       1 pv_controller.go:864] volume "local-pvm92wv" entered phase "Released"
I0527 00:41:19.705948       1 pv_controller_base.go:504] deletion of claim "persistent-local-volumes-test-5020/pvc-xwmrp" was already processed
E0527 00:41:19.840990       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:41:20.069500       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0527 00:41:20.094421       1 tokens_controller.go:262] error synchronizing serviceaccount volume-6675/default: secrets "default-token-bnndm" is forbidden: unable to create new content in namespace volume-6675 because it is being terminated
I0527 00:41:20.184908       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-708/pod-0b7652bb-3289-4685-b297-e9aa0696ac0d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-tw85v pvc- persistent-local-volumes-test-708  13482c35-fd6b-4b84-84ef-d4512b5014c1 34806 0 2021-05-27 00:41:05 +0000 UTC 2021-05-27 00:41:20 +0000 UTC 0xc003875498 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-27 00:41:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-27 00:41:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvw6qbw,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-708,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0527 00:41:20.184988       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-708/pvc-tw85v because it is still being used
I0527 00:41:20.313054       1 namespace_controller.go:185] Namespace has been deleted container-runtime-4981
I0527 00:41:20.348027       1 namespace_controller.go:185] Namespace has been deleted provisioning-5993
I0527 00:41:20.987885       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-6865/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0527 00:41:21.003963       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-8577/pvc-9nl82: storageclass.storage.k8s.io "provisioning-8577" not found
I0527 00:41:21.004257       1 event.go:291] "Event occurred" object="provisioning-8577/pvc-9nl82" kind="PersistentVolumeClaim" apiVersion=\&