Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-08-25 04:02
Elapsed: 36m55s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 126 lines ...
I0825 04:03:27.978344    4061 up.go:43] Cleaning up any leaked resources from previous cluster
I0825 04:03:27.978381    4061 dumplogs.go:38] /logs/artifacts/357c760e-0559-11ec-8b87-fa1b3e902a44/kops toolbox dump --name e2e-187541ca57-a9514.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I0825 04:03:27.996538    4082 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0825 04:03:27.996639    4082 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-187541ca57-a9514.test-cncf-aws.k8s.io" not found
W0825 04:03:28.500371    4061 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0825 04:03:28.500426    4061 down.go:48] /logs/artifacts/357c760e-0559-11ec-8b87-fa1b3e902a44/kops delete cluster --name e2e-187541ca57-a9514.test-cncf-aws.k8s.io --yes
I0825 04:03:28.515901    4092 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0825 04:03:28.515998    4092 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-187541ca57-a9514.test-cncf-aws.k8s.io" not found
I0825 04:03:29.034216    4061 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/08/25 04:03:29 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0825 04:03:29.042407    4061 http.go:37] curl https://ip.jsb.workers.dev
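The two curl lines above show the harness resolving its own external IP: the GCE metadata endpoint returns 404 (this runner has no external access config), so it falls back to a public IP echo service. A minimal sketch of that try-in-order fallback pattern, assuming nothing beyond the pattern itself (the helper name `first_success` is hypothetical, not from the harness):

```python
from typing import Callable, List


def first_success(sources: List[Callable[[], str]]) -> str:
    """Return the result of the first source that doesn't raise.

    Mirrors the metadata-server-then-echo-service fallback seen in the
    log above: each source is tried in order, failures (e.g. an HTTP 404
    from the metadata server) move on to the next one.
    """
    last_err = None
    for fetch in sources:
        try:
            return fetch()
        except Exception as err:
            last_err = err
    raise RuntimeError("no source returned an external IP") from last_err
```

In this log the first source corresponds to `http://metadata.google.internal/...` and the second to `https://ip.jsb.workers.dev`.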
I0825 04:03:29.141327    4061 up.go:144] /logs/artifacts/357c760e-0559-11ec-8b87-fa1b3e902a44/kops create cluster --name e2e-187541ca57-a9514.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.20.10 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=amazon/amzn2-ami-hvm-2.0.20210721.2-x86_64-gp2 --channel=alpha --networking=flannel --container-runtime=containerd --admin-access 34.67.247.30/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0825 04:03:29.155497    4102 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0825 04:03:29.155595    4102 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0825 04:03:29.199801    4102 create_cluster.go:724] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0825 04:03:29.695458    4102 new_cluster.go:962]  Cloud Provider ID = aws
... skipping 52 lines ...

I0825 04:03:55.625612    4061 up.go:181] /logs/artifacts/357c760e-0559-11ec-8b87-fa1b3e902a44/kops validate cluster --name e2e-187541ca57-a9514.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0825 04:03:55.639647    4122 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0825 04:03:55.639859    4122 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-187541ca57-a9514.test-cncf-aws.k8s.io

W0825 04:03:56.894004    4122 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0825 04:04:06.929974    4122 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0825 04:04:16.961638    4122 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
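The validation error above is kops waiting for dns-controller to replace its placeholder A record (203.0.113.123) with the real master IP. The API record can be in one of three states: missing (the "no such host" lines above), still the placeholder, or real. A minimal classifier for those states — a sketch using only standard-library DNS resolution; the function name is hypothetical:

```python
import socket

# Placeholder IP that kops writes into the API DNS record
# before dns-controller has updated it (per the message above).
KOPS_PLACEHOLDER = "203.0.113.123"


def api_dns_status(api_host: str) -> str:
    """Classify the cluster API DNS record.

    Returns 'missing' on NXDOMAIN (the 'no such host' state),
    'placeholder' if the record still holds the kops placeholder,
    or 'real (<ip>)' once dns-controller has done its job.
    """
    try:
        ip = socket.gethostbyname(api_host)
    except socket.gaierror:
        return "missing"
    return "placeholder" if ip == KOPS_PLACEHOLDER else "real (%s)" % ip
```

For this job the host being polled is api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io.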
W0825 04:04:26.998369    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:04:37.030694    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:04:47.060076    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:04:57.108071    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:05:07.140356    4122 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0825 04:05:17.184313    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:05:27.233041    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:05:37.277649    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:05:47.309786    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:05:57.360935    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:06:07.407017    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:06:17.437342    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:06:27.469468    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:06:37.500403    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:06:47.530515    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
W0825 04:06:57.560148    4122 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0825 04:07:07.579882    4122 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0825 04:07:17.612877    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 13 lines ...
Pod	kube-system/coredns-5489b75945-wxzfq		system-cluster-critical pod "coredns-5489b75945-wxzfq" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-swfmx	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-swfmx" is pending
Pod	kube-system/kube-flannel-ds-4f8d9		system-node-critical pod "kube-flannel-ds-4f8d9" is pending
Pod	kube-system/kube-flannel-ds-q4bc4		system-node-critical pod "kube-flannel-ds-q4bc4" is pending
Pod	kube-system/kube-flannel-ds-qmrf7		system-node-critical pod "kube-flannel-ds-qmrf7" is pending

Validation Failed
W0825 04:07:30.042540    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-233.eu-west-3.compute.internal	node "ip-172-20-37-233.eu-west-3.compute.internal" is not ready
Node	ip-172-20-44-96.eu-west-3.compute.internal	master "ip-172-20-44-96.eu-west-3.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-5489b75945-f586w		system-cluster-critical pod "coredns-5489b75945-f586w" is not ready (coredns)

Validation Failed
W0825 04:07:41.589129    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 6 lines ...
ip-172-20-44-96.eu-west-3.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-233.eu-west-3.compute.internal	node "ip-172-20-37-233.eu-west-3.compute.internal" is not ready

Validation Failed
W0825 04:07:53.140363    4122 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.medium	4	4	eu-west-3a

... skipping 1415 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:15.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1519" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:17.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2331" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:17.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":2,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:17.595: INFO: Only supported for providers [openstack] (not aws)
... skipping 138 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:10:14.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6" in namespace "downward-api-5272" to be "Succeeded or Failed"
Aug 25 04:10:14.617: INFO: Pod "downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6": Phase="Pending", Reason="", readiness=false. Elapsed: 103.226843ms
Aug 25 04:10:16.721: INFO: Pod "downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20671572s
Aug 25 04:10:18.825: INFO: Pod "downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310577797s
Aug 25 04:10:20.929: INFO: Pod "downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414558888s
STEP: Saw pod success
Aug 25 04:10:20.929: INFO: Pod "downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6" satisfied condition "Succeeded or Failed"
Aug 25 04:10:21.032: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6 container client-container: <nil>
STEP: delete the pod
Aug 25 04:10:21.265: INFO: Waiting for pod downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6 to disappear
Aug 25 04:10:21.368: INFO: Pod downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.003 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:21.696: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 75 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 26 lines ...
Aug 25 04:10:14.043: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 25 04:10:14.147: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 25 04:10:22.394: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 21 lines ...
Aug 25 04:10:14.177: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-3861faec-0583-4286-a4f0-f49ac9b9e264
STEP: Creating a pod to test consume configMaps
Aug 25 04:10:14.598: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988" in namespace "projected-916" to be "Succeeded or Failed"
Aug 25 04:10:14.702: INFO: Pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988": Phase="Pending", Reason="", readiness=false. Elapsed: 103.520912ms
Aug 25 04:10:16.805: INFO: Pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20725126s
Aug 25 04:10:18.909: INFO: Pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311320807s
Aug 25 04:10:21.013: INFO: Pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415234131s
Aug 25 04:10:23.118: INFO: Pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.519729563s
STEP: Saw pod success
Aug 25 04:10:23.118: INFO: Pod "pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988" satisfied condition "Succeeded or Failed"
Aug 25 04:10:23.223: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:10:23.823: INFO: Waiting for pod pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988 to disappear
Aug 25 04:10:23.926: INFO: Pod pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.589 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:11.345 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:26.917: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:10:15.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec" in namespace "projected-589" to be "Succeeded or Failed"
Aug 25 04:10:15.815: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec": Phase="Pending", Reason="", readiness=false. Elapsed: 105.68981ms
Aug 25 04:10:17.920: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210198093s
Aug 25 04:10:20.030: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320603777s
Aug 25 04:10:22.134: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424739963s
Aug 25 04:10:24.238: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529027817s
Aug 25 04:10:26.345: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.635827333s
STEP: Saw pod success
Aug 25 04:10:26.345: INFO: Pod "downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec" satisfied condition "Succeeded or Failed"
Aug 25 04:10:26.449: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec container client-container: <nil>
STEP: delete the pod
Aug 25 04:10:26.664: INFO: Waiting for pod downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec to disappear
Aug 25 04:10:26.768: INFO: Pod downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:13.366 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:27.093: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:27.984: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSS
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:10:22.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-3919/configmap-test-78aa0b1d-d13e-4748-8242-b022345dace6
STEP: Creating a pod to test consume configMaps
Aug 25 04:10:23.656: INFO: Waiting up to 5m0s for pod "pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502" in namespace "configmap-3919" to be "Succeeded or Failed"
Aug 25 04:10:23.759: INFO: Pod "pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502": Phase="Pending", Reason="", readiness=false. Elapsed: 102.885698ms
Aug 25 04:10:25.862: INFO: Pod "pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206301766s
Aug 25 04:10:27.967: INFO: Pod "pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311168854s
Aug 25 04:10:30.070: INFO: Pod "pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414386287s
STEP: Saw pod success
Aug 25 04:10:30.070: INFO: Pod "pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502" satisfied condition "Succeeded or Failed"
Aug 25 04:10:30.173: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502 container env-test: <nil>
STEP: delete the pod
Aug 25 04:10:30.397: INFO: Waiting for pod pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502 to disappear
Aug 25 04:10:30.499: INFO: Pod pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.806 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:30.747: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:32.542: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:33.287: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 45 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
Aug 25 04:10:28.558: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 25 04:10:28.559: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zbbs
STEP: Creating a pod to test subpath
Aug 25 04:10:28.664: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zbbs" in namespace "provisioning-7213" to be "Succeeded or Failed"
Aug 25 04:10:28.767: INFO: Pod "pod-subpath-test-inlinevolume-zbbs": Phase="Pending", Reason="", readiness=false. Elapsed: 103.086946ms
Aug 25 04:10:30.871: INFO: Pod "pod-subpath-test-inlinevolume-zbbs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206597877s
Aug 25 04:10:32.986: INFO: Pod "pod-subpath-test-inlinevolume-zbbs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321875367s
STEP: Saw pod success
Aug 25 04:10:32.986: INFO: Pod "pod-subpath-test-inlinevolume-zbbs" satisfied condition "Succeeded or Failed"
Aug 25 04:10:33.095: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-zbbs container test-container-subpath-inlinevolume-zbbs: <nil>
STEP: delete the pod
Aug 25 04:10:33.314: INFO: Waiting for pod pod-subpath-test-inlinevolume-zbbs to disappear
Aug 25 04:10:33.418: INFO: Pod pod-subpath-test-inlinevolume-zbbs no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zbbs
Aug 25 04:10:33.418: INFO: Deleting pod "pod-subpath-test-inlinevolume-zbbs" in namespace "provisioning-7213"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:33.863: INFO: Only supported for providers [gce gke] (not aws)
... skipping 113 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:34.399: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:34.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4551" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":3,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:34.960: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 130 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:10:31.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug 25 04:10:32.624: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8febc304-fcc4-4e12-9e80-8df957288102" in namespace "security-context-test-115" to be "Succeeded or Failed"
Aug 25 04:10:32.729: INFO: Pod "busybox-user-65534-8febc304-fcc4-4e12-9e80-8df957288102": Phase="Pending", Reason="", readiness=false. Elapsed: 104.798666ms
Aug 25 04:10:34.834: INFO: Pod "busybox-user-65534-8febc304-fcc4-4e12-9e80-8df957288102": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209783157s
Aug 25 04:10:36.939: INFO: Pod "busybox-user-65534-8febc304-fcc4-4e12-9e80-8df957288102": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314716269s
Aug 25 04:10:36.939: INFO: Pod "busybox-user-65534-8febc304-fcc4-4e12-9e80-8df957288102" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:36.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-115" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:37.164: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 29 lines ...
Aug 25 04:10:14.373: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Aug 25 04:10:15.240: INFO: Successfully created a new PD: "aws://eu-west-3a/vol-0f36b5f2b8afdad78".
Aug 25 04:10:15.240: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-zkmz
STEP: Creating a pod to test exec-volume-test
Aug 25 04:10:15.345: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-zkmz" in namespace "volume-6392" to be "Succeeded or Failed"
Aug 25 04:10:15.449: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 103.58237ms
Aug 25 04:10:17.553: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207074305s
Aug 25 04:10:19.656: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310822905s
Aug 25 04:10:21.760: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414244782s
Aug 25 04:10:23.863: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517805724s
Aug 25 04:10:25.967: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.621183128s
Aug 25 04:10:28.071: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.725771941s
Aug 25 04:10:30.174: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.828942794s
Aug 25 04:10:32.295: INFO: Pod "exec-volume-test-inlinevolume-zkmz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.949170274s
STEP: Saw pod success
Aug 25 04:10:32.295: INFO: Pod "exec-volume-test-inlinevolume-zkmz" satisfied condition "Succeeded or Failed"
Aug 25 04:10:32.399: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod exec-volume-test-inlinevolume-zkmz container exec-container-inlinevolume-zkmz: <nil>
STEP: delete the pod
Aug 25 04:10:32.613: INFO: Waiting for pod exec-volume-test-inlinevolume-zkmz to disappear
Aug 25 04:10:32.717: INFO: Pod exec-volume-test-inlinevolume-zkmz no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-zkmz
Aug 25 04:10:32.718: INFO: Deleting pod "exec-volume-test-inlinevolume-zkmz" in namespace "volume-6392"
Aug 25 04:10:33.046: INFO: Couldn't delete PD "aws://eu-west-3a/vol-0f36b5f2b8afdad78", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f36b5f2b8afdad78 is currently attached to i-025aa92a1b8232ae0
	status code: 400, request id: 2e8ac178-b79e-465b-a0b1-ac03a21a17d3
Aug 25 04:10:38.600: INFO: Successfully deleted PD "aws://eu-west-3a/vol-0f36b5f2b8afdad78".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:38.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6392" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:38.968: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 137 lines ...
Aug 25 04:10:26.408: INFO: PersistentVolume nfs-wpgwx found and phase=Bound (103.670799ms)
Aug 25 04:10:26.512: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bbpwt] to have phase Bound
Aug 25 04:10:26.616: INFO: PersistentVolumeClaim pvc-bbpwt found and phase=Bound (103.689594ms)
STEP: Checking pod has write access to PersistentVolumes
Aug 25 04:10:26.824: INFO: Creating nfs test pod
Aug 25 04:10:26.929: INFO: Pod should terminate with exitcode 0 (success)
Aug 25 04:10:26.929: INFO: Waiting up to 5m0s for pod "pvc-tester-lqg72" in namespace "pv-3410" to be "Succeeded or Failed"
Aug 25 04:10:27.036: INFO: Pod "pvc-tester-lqg72": Phase="Pending", Reason="", readiness=false. Elapsed: 106.600378ms
Aug 25 04:10:29.140: INFO: Pod "pvc-tester-lqg72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.210983869s
STEP: Saw pod success
Aug 25 04:10:29.141: INFO: Pod "pvc-tester-lqg72" satisfied condition "Succeeded or Failed"
Aug 25 04:10:29.141: INFO: Pod pvc-tester-lqg72 succeeded 
Aug 25 04:10:29.141: INFO: Deleting pod "pvc-tester-lqg72" in namespace "pv-3410"
Aug 25 04:10:29.268: INFO: Wait up to 5m0s for pod "pvc-tester-lqg72" to be fully deleted
Aug 25 04:10:29.478: INFO: Creating nfs test pod
Aug 25 04:10:29.582: INFO: Pod should terminate with exitcode 0 (success)
Aug 25 04:10:29.582: INFO: Waiting up to 5m0s for pod "pvc-tester-kflbn" in namespace "pv-3410" to be "Succeeded or Failed"
Aug 25 04:10:29.686: INFO: Pod "pvc-tester-kflbn": Phase="Pending", Reason="", readiness=false. Elapsed: 103.882863ms
Aug 25 04:10:31.791: INFO: Pod "pvc-tester-kflbn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208187638s
Aug 25 04:10:33.895: INFO: Pod "pvc-tester-kflbn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312497182s
STEP: Saw pod success
Aug 25 04:10:33.895: INFO: Pod "pvc-tester-kflbn" satisfied condition "Succeeded or Failed"
Aug 25 04:10:33.895: INFO: Pod pvc-tester-kflbn succeeded 
Aug 25 04:10:33.895: INFO: Deleting pod "pvc-tester-kflbn" in namespace "pv-3410"
Aug 25 04:10:34.004: INFO: Wait up to 5m0s for pod "pvc-tester-kflbn" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Aug 25 04:10:34.419: INFO: Deleting PVC pvc-bbpwt to trigger reclamation of PV nfs-wpgwx
Aug 25 04:10:34.419: INFO: Deleting PersistentVolumeClaim "pvc-bbpwt"
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":1,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:30.400 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:44.112: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:46.094: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 76 lines ...
• [SLOW TEST:12.781 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:46.657: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 62 lines ...
Aug 25 04:10:46.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Aug 25 04:10:47.293: INFO: Waiting up to 5m0s for pod "downward-api-1a345b6c-4127-495e-adff-798059196975" in namespace "downward-api-5454" to be "Succeeded or Failed"
Aug 25 04:10:47.398: INFO: Pod "downward-api-1a345b6c-4127-495e-adff-798059196975": Phase="Pending", Reason="", readiness=false. Elapsed: 105.034507ms
Aug 25 04:10:49.502: INFO: Pod "downward-api-1a345b6c-4127-495e-adff-798059196975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208607016s
STEP: Saw pod success
Aug 25 04:10:49.502: INFO: Pod "downward-api-1a345b6c-4127-495e-adff-798059196975" satisfied condition "Succeeded or Failed"
Aug 25 04:10:49.605: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod downward-api-1a345b6c-4127-495e-adff-798059196975 container dapi-container: <nil>
STEP: delete the pod
Aug 25 04:10:49.823: INFO: Waiting for pod downward-api-1a345b6c-4127-495e-adff-798059196975 to disappear
Aug 25 04:10:49.926: INFO: Pod downward-api-1a345b6c-4127-495e-adff-798059196975 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:49.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5454" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:50.147: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 11 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:833
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:10:47.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Aug 25 04:10:48.552: INFO: Waiting up to 5m0s for pod "downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647" in namespace "downward-api-4686" to be "Succeeded or Failed"
Aug 25 04:10:48.656: INFO: Pod "downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647": Phase="Pending", Reason="", readiness=false. Elapsed: 103.541418ms
Aug 25 04:10:50.759: INFO: Pod "downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206967744s
STEP: Saw pod success
Aug 25 04:10:50.760: INFO: Pod "downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647" satisfied condition "Succeeded or Failed"
Aug 25 04:10:50.863: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647 container dapi-container: <nil>
STEP: delete the pod
Aug 25 04:10:51.078: INFO: Waiting for pod downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647 to disappear
Aug 25 04:10:51.182: INFO: Pod downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 32 lines ...
Aug 25 04:10:31.009: INFO: PersistentVolumeClaim pvc-97ncb found but phase is Pending instead of Bound.
Aug 25 04:10:33.114: INFO: PersistentVolumeClaim pvc-97ncb found and phase=Bound (6.416037155s)
Aug 25 04:10:33.114: INFO: Waiting up to 3m0s for PersistentVolume local-4qwzg to have phase Bound
Aug 25 04:10:33.221: INFO: PersistentVolume local-4qwzg found and phase=Bound (106.93796ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jgcp
STEP: Creating a pod to test subpath
Aug 25 04:10:33.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jgcp" in namespace "provisioning-3181" to be "Succeeded or Failed"
Aug 25 04:10:33.634: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Pending", Reason="", readiness=false. Elapsed: 102.931192ms
Aug 25 04:10:35.737: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206016182s
Aug 25 04:10:37.841: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310121861s
Aug 25 04:10:39.944: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413204583s
Aug 25 04:10:42.047: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.516587333s
STEP: Saw pod success
Aug 25 04:10:42.048: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp" satisfied condition "Succeeded or Failed"
Aug 25 04:10:42.150: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-jgcp container test-container-subpath-preprovisionedpv-jgcp: <nil>
STEP: delete the pod
Aug 25 04:10:42.650: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jgcp to disappear
Aug 25 04:10:42.753: INFO: Pod pod-subpath-test-preprovisionedpv-jgcp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jgcp
Aug 25 04:10:42.753: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jgcp" in namespace "provisioning-3181"
STEP: Creating pod pod-subpath-test-preprovisionedpv-jgcp
STEP: Creating a pod to test subpath
Aug 25 04:10:42.959: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jgcp" in namespace "provisioning-3181" to be "Succeeded or Failed"
Aug 25 04:10:43.062: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Pending", Reason="", readiness=false. Elapsed: 102.685233ms
Aug 25 04:10:45.165: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206063283s
Aug 25 04:10:47.272: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31233068s
STEP: Saw pod success
Aug 25 04:10:47.272: INFO: Pod "pod-subpath-test-preprovisionedpv-jgcp" satisfied condition "Succeeded or Failed"
Aug 25 04:10:47.375: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-jgcp container test-container-subpath-preprovisionedpv-jgcp: <nil>
STEP: delete the pod
Aug 25 04:10:47.594: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jgcp to disappear
Aug 25 04:10:47.698: INFO: Pod pod-subpath-test-preprovisionedpv-jgcp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jgcp
Aug 25 04:10:47.698: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jgcp" in namespace "provisioning-3181"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:51.437: INFO: Driver emptydir doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 442 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:10:53.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":2,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:54.114: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
Aug 25 04:10:35.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Aug 25 04:10:35.878: INFO: Waiting up to 5m0s for pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" in namespace "svcaccounts-3069" to be "Succeeded or Failed"
Aug 25 04:10:35.980: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 101.611785ms
Aug 25 04:10:38.082: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203833535s
Aug 25 04:10:40.187: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308692405s
STEP: Saw pod success
Aug 25 04:10:40.187: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" satisfied condition "Succeeded or Failed"
Aug 25 04:10:40.290: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:10:40.499: INFO: Waiting for pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 to disappear
Aug 25 04:10:40.601: INFO: Pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 no longer exists
STEP: Creating a pod to test service account token: 
Aug 25 04:10:40.704: INFO: Waiting up to 5m0s for pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" in namespace "svcaccounts-3069" to be "Succeeded or Failed"
Aug 25 04:10:40.806: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 101.796667ms
Aug 25 04:10:42.908: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.204139618s
STEP: Saw pod success
Aug 25 04:10:42.908: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" satisfied condition "Succeeded or Failed"
Aug 25 04:10:43.010: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:10:43.219: INFO: Waiting for pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 to disappear
Aug 25 04:10:43.320: INFO: Pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 no longer exists
STEP: Creating a pod to test service account token: 
Aug 25 04:10:43.423: INFO: Waiting up to 5m0s for pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" in namespace "svcaccounts-3069" to be "Succeeded or Failed"
Aug 25 04:10:43.525: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 101.659711ms
Aug 25 04:10:45.628: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204206667s
Aug 25 04:10:47.730: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306378087s
STEP: Saw pod success
Aug 25 04:10:47.730: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" satisfied condition "Succeeded or Failed"
Aug 25 04:10:47.832: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:10:48.051: INFO: Waiting for pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 to disappear
Aug 25 04:10:48.152: INFO: Pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 no longer exists
STEP: Creating a pod to test service account token: 
Aug 25 04:10:48.255: INFO: Waiting up to 5m0s for pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" in namespace "svcaccounts-3069" to be "Succeeded or Failed"
Aug 25 04:10:48.357: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 101.8108ms
Aug 25 04:10:50.460: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204190501s
Aug 25 04:10:52.563: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307162693s
Aug 25 04:10:54.665: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.409239177s
STEP: Saw pod success
Aug 25 04:10:54.665: INFO: Pod "test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" satisfied condition "Succeeded or Failed"
Aug 25 04:10:54.766: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:10:54.979: INFO: Waiting for pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 to disappear
Aug 25 04:10:55.081: INFO: Pod test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:20.042 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":40,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 64 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:55.344: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 230 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:10:51.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-6cde3859-7948-4ca0-aa3e-234d2a8b9226
STEP: Creating a pod to test consume secrets
Aug 25 04:10:52.129: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13" in namespace "projected-7568" to be "Succeeded or Failed"
Aug 25 04:10:52.233: INFO: Pod "pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13": Phase="Pending", Reason="", readiness=false. Elapsed: 103.419005ms
Aug 25 04:10:54.336: INFO: Pod "pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206817021s
Aug 25 04:10:56.441: INFO: Pod "pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311149908s
Aug 25 04:10:58.545: INFO: Pod "pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.415860954s
STEP: Saw pod success
Aug 25 04:10:58.545: INFO: Pod "pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13" satisfied condition "Succeeded or Failed"
Aug 25 04:10:58.650: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug 25 04:10:58.866: INFO: Waiting for pod pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13 to disappear
Aug 25 04:10:58.969: INFO: Pod pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.777 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:10:59.209: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 28 lines ...
Aug 25 04:10:45.727: INFO: PersistentVolumeClaim pvc-n2wpn found but phase is Pending instead of Bound.
Aug 25 04:10:47.831: INFO: PersistentVolumeClaim pvc-n2wpn found and phase=Bound (14.845222032s)
Aug 25 04:10:47.832: INFO: Waiting up to 3m0s for PersistentVolume local-qj6gd to have phase Bound
Aug 25 04:10:47.945: INFO: PersistentVolume local-qj6gd found and phase=Bound (113.388374ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-29r2
STEP: Creating a pod to test subpath
Aug 25 04:10:48.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-29r2" in namespace "provisioning-2581" to be "Succeeded or Failed"
Aug 25 04:10:48.364: INFO: Pod "pod-subpath-test-preprovisionedpv-29r2": Phase="Pending", Reason="", readiness=false. Elapsed: 103.747848ms
Aug 25 04:10:50.468: INFO: Pod "pod-subpath-test-preprovisionedpv-29r2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207904582s
Aug 25 04:10:52.572: INFO: Pod "pod-subpath-test-preprovisionedpv-29r2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31198328s
Aug 25 04:10:54.676: INFO: Pod "pod-subpath-test-preprovisionedpv-29r2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416050854s
Aug 25 04:10:56.780: INFO: Pod "pod-subpath-test-preprovisionedpv-29r2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.520409768s
STEP: Saw pod success
Aug 25 04:10:56.780: INFO: Pod "pod-subpath-test-preprovisionedpv-29r2" satisfied condition "Succeeded or Failed"
Aug 25 04:10:56.886: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-29r2 container test-container-subpath-preprovisionedpv-29r2: <nil>
STEP: delete the pod
Aug 25 04:10:57.210: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-29r2 to disappear
Aug 25 04:10:57.314: INFO: Pod pod-subpath-test-preprovisionedpv-29r2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-29r2
Aug 25 04:10:57.314: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-29r2" in namespace "provisioning-2581"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:01.183: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 67 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-324cc42a-d6ff-4ed0-9f94-114e701b10a4
STEP: Creating a pod to test consume secrets
Aug 25 04:10:54.185: INFO: Waiting up to 5m0s for pod "pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce" in namespace "secrets-1911" to be "Succeeded or Failed"
Aug 25 04:10:54.289: INFO: Pod "pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 103.666909ms
Aug 25 04:10:56.393: INFO: Pod "pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207813827s
Aug 25 04:10:58.498: INFO: Pod "pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312814673s
Aug 25 04:11:00.602: INFO: Pod "pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.417253615s
STEP: Saw pod success
Aug 25 04:11:00.602: INFO: Pod "pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce" satisfied condition "Succeeded or Failed"
Aug 25 04:11:00.706: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce container secret-volume-test: <nil>
STEP: delete the pod
Aug 25 04:11:00.919: INFO: Waiting for pod pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce to disappear
Aug 25 04:11:01.023: INFO: Pod pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 37 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-b6936d1a-f3b5-4f17-a7a4-bdf71fb87782
STEP: Creating a pod to test consume configMaps
Aug 25 04:10:56.093: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed" in namespace "projected-6259" to be "Succeeded or Failed"
Aug 25 04:10:56.198: INFO: Pod "pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 104.358575ms
Aug 25 04:10:58.302: INFO: Pod "pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20906149s
Aug 25 04:11:00.407: INFO: Pod "pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314000532s
Aug 25 04:11:02.512: INFO: Pod "pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.419044594s
STEP: Saw pod success
Aug 25 04:11:02.512: INFO: Pod "pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed" satisfied condition "Succeeded or Failed"
Aug 25 04:11:02.617: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:11:02.863: INFO: Waiting for pod pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed to disappear
Aug 25 04:11:02.967: INFO: Pod pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.819 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:03.191: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 168 lines ...
• [SLOW TEST:33.436 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:116
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":3,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:04.239: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:04.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9028" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":3,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:04.363: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 24 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64
[It] should support unsafe sysctls which are actually whitelisted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.376 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support unsafe sysctls which are actually whitelisted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":2,"skipped":21,"failed":0}
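The sysctl test above creates a pod that sets `kernel.shm_rmid_forced` through the pod-level security context, then reads the value back from the pod logs. A minimal sketch of the equivalent manifest as a plain Python dict (the pod name, image, and helper function are illustrative, not taken from the e2e suite):

```python
def sysctl_pod(name, sysctl, value):
    """Build a pod manifest dict whose container prints the sysctl value,
    mirroring the 'Creating a pod with the kernel.shm_rmid_forced sysctl' step."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "securityContext": {
                # Pod-level sysctls; unsafe ones must be allowed by the kubelet
                # (--allowed-unsafe-sysctls=...) before the pod will schedule.
                "sysctls": [{"name": sysctl, "value": value}],
            },
            "restartPolicy": "Never",
            "containers": [{
                "name": "main",
                "image": "busybox",
                # Echo the value so the test can verify it from the pod logs.
                "command": ["sh", "-c", "sysctl " + sysctl],
            }],
        },
    }

pod = sysctl_pod("sysctl-demo", "kernel.shm_rmid_forced", "1")
print(pod["spec"]["securityContext"]["sysctls"][0])
```

The "actually whitelisted" wording in the test name refers to the kubelet's allowed-unsafe-sysctls list: a sysctl outside the safe set is rejected at admission unless that flag permits it.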
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:06.649: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 58 lines ...
• [SLOW TEST:15.258 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:173
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-n7gr
STEP: Creating a pod to test atomic-volume-subpath
Aug 25 04:10:43.175: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n7gr" in namespace "subpath-2978" to be "Succeeded or Failed"
Aug 25 04:10:43.279: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Pending", Reason="", readiness=false. Elapsed: 104.019367ms
Aug 25 04:10:45.384: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208706287s
Aug 25 04:10:47.488: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 4.313367884s
Aug 25 04:10:49.593: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 6.417720607s
Aug 25 04:10:51.697: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 8.522408057s
Aug 25 04:10:53.802: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 10.626669138s
Aug 25 04:10:55.907: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 12.731793387s
Aug 25 04:10:58.011: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 14.836391524s
Aug 25 04:11:00.116: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 16.94105567s
Aug 25 04:11:02.221: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 19.045560282s
Aug 25 04:11:04.326: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Running", Reason="", readiness=true. Elapsed: 21.151393832s
Aug 25 04:11:06.440: INFO: Pod "pod-subpath-test-configmap-n7gr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.264518588s
STEP: Saw pod success
Aug 25 04:11:06.440: INFO: Pod "pod-subpath-test-configmap-n7gr" satisfied condition "Succeeded or Failed"
Aug 25 04:11:06.544: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-configmap-n7gr container test-container-subpath-configmap-n7gr: <nil>
STEP: delete the pod
Aug 25 04:11:06.763: INFO: Waiting for pod pod-subpath-test-configmap-n7gr to disappear
Aug 25 04:11:06.867: INFO: Pod pod-subpath-test-configmap-n7gr no longer exists
STEP: Deleting pod pod-subpath-test-configmap-n7gr
Aug 25 04:11:06.867: INFO: Deleting pod "pod-subpath-test-configmap-n7gr" in namespace "subpath-2978"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
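The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed: ...` lines throughout this log come from the e2e framework polling the pod's phase on a fixed interval until it reaches a terminal phase or the timeout expires. A stand-alone sketch of that pattern (the interval, timeout, and `get_phase`/`clock`/`sleep` hooks are assumptions for testability, not the framework's real Go API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Returns the final phase, or raises TimeoutError. The injectable clock and
    sleep exist only so the sketch can be exercised without real delays.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print('Pod phase="%s". Elapsed: %.3fs' % (phase, elapsed))
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError("pod still %s after %ss" % (phase, timeout))
        sleep(interval)
```

In the real suite this loop lives in the Go framework (the `WaitForPodSuccessInNamespace`-style helpers); the sketch only captures the shape of the polling that produces the log lines above.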

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:07.224: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 185 lines ...
Aug 25 04:11:00.966: INFO: PersistentVolumeClaim pvc-94q5g found but phase is Pending instead of Bound.
Aug 25 04:11:03.070: INFO: PersistentVolumeClaim pvc-94q5g found and phase=Bound (12.733252635s)
Aug 25 04:11:03.070: INFO: Waiting up to 3m0s for PersistentVolume local-dhwnc to have phase Bound
Aug 25 04:11:03.173: INFO: PersistentVolume local-dhwnc found and phase=Bound (102.965296ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-bb2l
STEP: Creating a pod to test exec-volume-test
Aug 25 04:11:03.484: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-bb2l" in namespace "volume-7271" to be "Succeeded or Failed"
Aug 25 04:11:03.587: INFO: Pod "exec-volume-test-preprovisionedpv-bb2l": Phase="Pending", Reason="", readiness=false. Elapsed: 103.210662ms
Aug 25 04:11:05.692: INFO: Pod "exec-volume-test-preprovisionedpv-bb2l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20875922s
STEP: Saw pod success
Aug 25 04:11:05.693: INFO: Pod "exec-volume-test-preprovisionedpv-bb2l" satisfied condition "Succeeded or Failed"
Aug 25 04:11:05.796: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-bb2l container exec-container-preprovisionedpv-bb2l: <nil>
STEP: delete the pod
Aug 25 04:11:06.008: INFO: Waiting for pod exec-volume-test-preprovisionedpv-bb2l to disappear
Aug 25 04:11:06.111: INFO: Pod exec-volume-test-preprovisionedpv-bb2l no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-bb2l
Aug 25 04:11:06.111: INFO: Deleting pod "exec-volume-test-preprovisionedpv-bb2l" in namespace "volume-7271"
... skipping 30 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-740f1b33-0e21-4993-8702-cf99840d3c89
STEP: Creating a pod to test consume secrets
Aug 25 04:11:05.077: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b" in namespace "projected-8828" to be "Succeeded or Failed"
Aug 25 04:11:05.180: INFO: Pod "pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.002647ms
Aug 25 04:11:07.283: INFO: Pod "pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206726558s
Aug 25 04:11:09.392: INFO: Pod "pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314865243s
STEP: Saw pod success
Aug 25 04:11:09.392: INFO: Pod "pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b" satisfied condition "Succeeded or Failed"
Aug 25 04:11:09.495: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b container secret-volume-test: <nil>
STEP: delete the pod
Aug 25 04:11:09.711: INFO: Waiting for pod pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b to disappear
Aug 25 04:11:09.816: INFO: Pod pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:310
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:12.833: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:13.502: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 123 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:01.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 87 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:13.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:14.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3527" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:08.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's args
Aug 25 04:11:08.825: INFO: Waiting up to 5m0s for pod "var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d" in namespace "var-expansion-2389" to be "Succeeded or Failed"
Aug 25 04:11:08.928: INFO: Pod "var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.871286ms
Aug 25 04:11:11.031: INFO: Pod "var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20615476s
Aug 25 04:11:13.135: INFO: Pod "var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309571344s
Aug 25 04:11:15.238: INFO: Pod "var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.413098938s
STEP: Saw pod success
Aug 25 04:11:15.239: INFO: Pod "var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d" satisfied condition "Succeeded or Failed"
Aug 25 04:11:15.341: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d container dapi-container: <nil>
STEP: delete the pod
Aug 25 04:11:15.555: INFO: Waiting for pod var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d to disappear
Aug 25 04:11:15.658: INFO: Pod var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.663 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}
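The var-expansion test above checks that `$(VAR)` references in a container's `args` are substituted from the container's environment. Kubernetes' documented rule is that a resolvable `$(VAR)` is replaced, an unresolvable one is left verbatim, and `$$(VAR)` escapes to a literal `$(VAR)`. A toy re-implementation of that substitution rule (an illustration of the documented behavior, not the actual kubelet code):

```python
import re

# First alternative matches the escaped form $$(VAR), second the plain $(VAR).
_REF = re.compile(r"\$\$\(([A-Za-z0-9_]+)\)|\$\(([A-Za-z0-9_]+)\)")

def expand(arg, env):
    """Expand $(VAR) references in a container arg: known variables are
    substituted, unknown references stay verbatim, $$(VAR) -> literal $(VAR)."""
    def repl(m):
        escaped, name = m.group(1), m.group(2)
        if escaped is not None:
            return "$(" + escaped + ")"     # escaped reference, no expansion
        return env.get(name, m.group(0))    # unknown refs stay as written
    return _REF.sub(repl, arg)
```

For example, with `env = {"MY_VAR": "hi"}`, the arg `echo $(MY_VAR)` becomes `echo hi`, while `echo $(MISSING)` passes through unchanged.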

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:15.899: INFO: Only supported for providers [gce gke] (not aws)
... skipping 1458 lines ...
• [SLOW TEST:40.417 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:128
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":2,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:19.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2267" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 58 lines ...
Aug 25 04:10:28.856: INFO: PersistentVolumeClaim csi-hostpathjtrtk found but phase is Pending instead of Bound.
Aug 25 04:10:30.959: INFO: PersistentVolumeClaim csi-hostpathjtrtk found but phase is Pending instead of Bound.
Aug 25 04:10:33.064: INFO: PersistentVolumeClaim csi-hostpathjtrtk found but phase is Pending instead of Bound.
Aug 25 04:10:35.168: INFO: PersistentVolumeClaim csi-hostpathjtrtk found and phase=Bound (14.834356326s)
STEP: Creating pod pod-subpath-test-dynamicpv-rzf5
STEP: Creating a pod to test subpath
Aug 25 04:10:35.488: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rzf5" in namespace "provisioning-3092" to be "Succeeded or Failed"
Aug 25 04:10:35.591: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 103.154888ms
Aug 25 04:10:37.694: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206651075s
Aug 25 04:10:39.798: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310623454s
Aug 25 04:10:41.902: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414197454s
Aug 25 04:10:44.006: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51838005s
Aug 25 04:10:46.110: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.622131191s
Aug 25 04:10:48.215: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.727642073s
Aug 25 04:10:50.319: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.831583219s
Aug 25 04:10:52.424: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.936065025s
Aug 25 04:10:54.527: INFO: Pod "pod-subpath-test-dynamicpv-rzf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.039674301s
STEP: Saw pod success
Aug 25 04:10:54.528: INFO: Pod "pod-subpath-test-dynamicpv-rzf5" satisfied condition "Succeeded or Failed"
Aug 25 04:10:54.631: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-rzf5 container test-container-volume-dynamicpv-rzf5: <nil>
STEP: delete the pod
Aug 25 04:10:54.844: INFO: Waiting for pod pod-subpath-test-dynamicpv-rzf5 to disappear
Aug 25 04:10:54.947: INFO: Pod pod-subpath-test-dynamicpv-rzf5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rzf5
Aug 25 04:10:54.947: INFO: Deleting pod "pod-subpath-test-dynamicpv-rzf5" in namespace "provisioning-3092"
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:23.552: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:25.099: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 65 lines ...
• [SLOW TEST:58.096 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted with a docker exec liveness probe with timeout 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout ","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:25.233: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:27.442: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 59 lines ...
Aug 25 04:11:15.785: INFO: PersistentVolumeClaim pvc-l2zt5 found but phase is Pending instead of Bound.
Aug 25 04:11:17.888: INFO: PersistentVolumeClaim pvc-l2zt5 found and phase=Bound (6.415726116s)
Aug 25 04:11:17.888: INFO: Waiting up to 3m0s for PersistentVolume local-mnx4v to have phase Bound
Aug 25 04:11:17.992: INFO: PersistentVolume local-mnx4v found and phase=Bound (103.554236ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-shd9
STEP: Creating a pod to test subpath
Aug 25 04:11:18.304: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-shd9" in namespace "provisioning-6888" to be "Succeeded or Failed"
Aug 25 04:11:18.408: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9": Phase="Pending", Reason="", readiness=false. Elapsed: 103.682798ms
Aug 25 04:11:20.512: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207904956s
Aug 25 04:11:22.617: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311940552s
STEP: Saw pod success
Aug 25 04:11:22.617: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9" satisfied condition "Succeeded or Failed"
Aug 25 04:11:22.720: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-shd9 container test-container-subpath-preprovisionedpv-shd9: <nil>
STEP: delete the pod
Aug 25 04:11:22.941: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-shd9 to disappear
Aug 25 04:11:23.045: INFO: Pod pod-subpath-test-preprovisionedpv-shd9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-shd9
Aug 25 04:11:23.045: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-shd9" in namespace "provisioning-6888"
STEP: Creating pod pod-subpath-test-preprovisionedpv-shd9
STEP: Creating a pod to test subpath
Aug 25 04:11:23.254: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-shd9" in namespace "provisioning-6888" to be "Succeeded or Failed"
Aug 25 04:11:23.357: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9": Phase="Pending", Reason="", readiness=false. Elapsed: 103.354629ms
Aug 25 04:11:25.461: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20733608s
STEP: Saw pod success
Aug 25 04:11:25.461: INFO: Pod "pod-subpath-test-preprovisionedpv-shd9" satisfied condition "Succeeded or Failed"
Aug 25 04:11:25.565: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-shd9 container test-container-subpath-preprovisionedpv-shd9: <nil>
STEP: delete the pod
Aug 25 04:11:25.781: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-shd9 to disappear
Aug 25 04:11:25.884: INFO: Pod pod-subpath-test-preprovisionedpv-shd9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-shd9
Aug 25 04:11:25.884: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-shd9" in namespace "provisioning-6888"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:28.165: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 193 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362
Aug 25 04:11:25.779: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-521cf9e6-c375-41cc-9147-bd43c04179a0" in namespace "security-context-test-6729" to be "Succeeded or Failed"
Aug 25 04:11:25.882: INFO: Pod "alpine-nnp-true-521cf9e6-c375-41cc-9147-bd43c04179a0": Phase="Pending", Reason="", readiness=false. Elapsed: 102.952964ms
Aug 25 04:11:27.987: INFO: Pod "alpine-nnp-true-521cf9e6-c375-41cc-9147-bd43c04179a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208067565s
Aug 25 04:11:30.091: INFO: Pod "alpine-nnp-true-521cf9e6-c375-41cc-9147-bd43c04179a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311932504s
Aug 25 04:11:30.091: INFO: Pod "alpine-nnp-true-521cf9e6-c375-41cc-9147-bd43c04179a0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:30.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6729" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:30.433: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 116 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Aug 25 04:11:25.798: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 25 04:11:25.798: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-chc9
STEP: Creating a pod to test subpath
Aug 25 04:11:25.904: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-chc9" in namespace "provisioning-4634" to be "Succeeded or Failed"
Aug 25 04:11:26.008: INFO: Pod "pod-subpath-test-inlinevolume-chc9": Phase="Pending", Reason="", readiness=false. Elapsed: 104.077721ms
Aug 25 04:11:28.112: INFO: Pod "pod-subpath-test-inlinevolume-chc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208535718s
Aug 25 04:11:30.221: INFO: Pod "pod-subpath-test-inlinevolume-chc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317306695s
STEP: Saw pod success
Aug 25 04:11:30.221: INFO: Pod "pod-subpath-test-inlinevolume-chc9" satisfied condition "Succeeded or Failed"
Aug 25 04:11:30.326: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-chc9 container test-container-volume-inlinevolume-chc9: <nil>
STEP: delete the pod
Aug 25 04:11:30.541: INFO: Waiting for pod pod-subpath-test-inlinevolume-chc9 to disappear
Aug 25 04:11:30.645: INFO: Pod pod-subpath-test-inlinevolume-chc9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-chc9
Aug 25 04:11:30.645: INFO: Deleting pod "pod-subpath-test-inlinevolume-chc9" in namespace "provisioning-4634"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:31.083: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
Aug 25 04:11:28.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 25 04:11:28.870: INFO: Waiting up to 5m0s for pod "pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75" in namespace "emptydir-7880" to be "Succeeded or Failed"
Aug 25 04:11:28.974: INFO: Pod "pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75": Phase="Pending", Reason="", readiness=false. Elapsed: 103.496222ms
Aug 25 04:11:31.078: INFO: Pod "pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207852041s
STEP: Saw pod success
Aug 25 04:11:31.078: INFO: Pod "pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75" satisfied condition "Succeeded or Failed"
Aug 25 04:11:31.182: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75 container test-container: <nil>
STEP: delete the pod
Aug 25 04:11:31.399: INFO: Waiting for pod pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75 to disappear
Aug 25 04:11:31.502: INFO: Pod pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:31.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7880" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:35.758: INFO: >>> kubeConfig: /root/.kube/config
... skipping 39 lines ...
Aug 25 04:10:35.508: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3723-aws-sczv7rp
STEP: creating a claim
Aug 25 04:10:35.611: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-pff9
STEP: Creating a pod to test subpath
Aug 25 04:10:35.923: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-pff9" in namespace "provisioning-3723" to be "Succeeded or Failed"
Aug 25 04:10:36.026: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 102.761913ms
Aug 25 04:10:38.129: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206316182s
Aug 25 04:10:40.233: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309872495s
Aug 25 04:10:42.336: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413255326s
Aug 25 04:10:44.440: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517122501s
Aug 25 04:10:46.545: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.62168719s
... skipping 4 lines ...
Aug 25 04:10:57.061: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.138254391s
Aug 25 04:10:59.165: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.241718873s
Aug 25 04:11:01.269: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.34568044s
Aug 25 04:11:03.372: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.449036079s
Aug 25 04:11:05.475: INFO: Pod "pod-subpath-test-dynamicpv-pff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.552543356s
STEP: Saw pod success
Aug 25 04:11:05.475: INFO: Pod "pod-subpath-test-dynamicpv-pff9" satisfied condition "Succeeded or Failed"
Aug 25 04:11:05.584: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-pff9 container test-container-subpath-dynamicpv-pff9: <nil>
STEP: delete the pod
Aug 25 04:11:05.797: INFO: Waiting for pod pod-subpath-test-dynamicpv-pff9 to disappear
Aug 25 04:11:05.901: INFO: Pod pod-subpath-test-dynamicpv-pff9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-pff9
Aug 25 04:11:05.901: INFO: Deleting pod "pod-subpath-test-dynamicpv-pff9" in namespace "provisioning-3723"
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":5,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:38.818: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:39.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-2283" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":6,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:39.806: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 364 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:44.257: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 63 lines ...
Aug 25 04:11:31.939: INFO: PersistentVolumeClaim pvc-8qqhq found but phase is Pending instead of Bound.
Aug 25 04:11:34.046: INFO: PersistentVolumeClaim pvc-8qqhq found and phase=Bound (10.63757625s)
Aug 25 04:11:34.046: INFO: Waiting up to 3m0s for PersistentVolume local-9cf4v to have phase Bound
Aug 25 04:11:34.151: INFO: PersistentVolume local-9cf4v found and phase=Bound (104.885859ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hgbc
STEP: Creating a pod to test subpath
Aug 25 04:11:34.463: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hgbc" in namespace "provisioning-5246" to be "Succeeded or Failed"
Aug 25 04:11:34.566: INFO: Pod "pod-subpath-test-preprovisionedpv-hgbc": Phase="Pending", Reason="", readiness=false. Elapsed: 103.441694ms
Aug 25 04:11:36.670: INFO: Pod "pod-subpath-test-preprovisionedpv-hgbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207642369s
Aug 25 04:11:38.785: INFO: Pod "pod-subpath-test-preprovisionedpv-hgbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321983503s
Aug 25 04:11:40.891: INFO: Pod "pod-subpath-test-preprovisionedpv-hgbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427708778s
Aug 25 04:11:42.995: INFO: Pod "pod-subpath-test-preprovisionedpv-hgbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.531705196s
STEP: Saw pod success
Aug 25 04:11:42.995: INFO: Pod "pod-subpath-test-preprovisionedpv-hgbc" satisfied condition "Succeeded or Failed"
Aug 25 04:11:43.099: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-hgbc container test-container-volume-preprovisionedpv-hgbc: <nil>
STEP: delete the pod
Aug 25 04:11:43.312: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hgbc to disappear
Aug 25 04:11:43.416: INFO: Pod pod-subpath-test-preprovisionedpv-hgbc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hgbc
Aug 25 04:11:43.416: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hgbc" in namespace "provisioning-5246"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:44.884: INFO: Only supported for providers [gce gke] (not aws)
... skipping 193 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:666
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:681
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:47.106: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:47.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-6687" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":2,"skipped":25,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:48.093: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:50.392: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 64 lines ...
Aug 25 04:11:44.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Aug 25 04:11:45.525: INFO: Waiting up to 5m0s for pod "security-context-44796083-4be0-459b-81a0-8553b4a1258b" in namespace "security-context-3186" to be "Succeeded or Failed"
Aug 25 04:11:45.632: INFO: Pod "security-context-44796083-4be0-459b-81a0-8553b4a1258b": Phase="Pending", Reason="", readiness=false. Elapsed: 107.148761ms
Aug 25 04:11:47.736: INFO: Pod "security-context-44796083-4be0-459b-81a0-8553b4a1258b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210759867s
Aug 25 04:11:49.840: INFO: Pod "security-context-44796083-4be0-459b-81a0-8553b4a1258b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31512718s
STEP: Saw pod success
Aug 25 04:11:49.840: INFO: Pod "security-context-44796083-4be0-459b-81a0-8553b4a1258b" satisfied condition "Succeeded or Failed"
Aug 25 04:11:49.944: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod security-context-44796083-4be0-459b-81a0-8553b4a1258b container test-container: <nil>
STEP: delete the pod
Aug 25 04:11:50.157: INFO: Waiting for pod security-context-44796083-4be0-459b-81a0-8553b4a1258b to disappear
Aug 25 04:11:50.261: INFO: Pod security-context-44796083-4be0-459b-81a0-8553b4a1258b no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.573 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":5,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:50.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name secret-emptykey-test-881e454a-08cc-4a1c-8064-dfb8d4028ee8
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:51.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9758" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:51.286: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 162 lines ...
• [SLOW TEST:11.792 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:37.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
Aug 25 04:10:54.662: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 25 04:10:54.662: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 25 04:10:54.662: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-434-aws-scg89pq      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-434    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-434-aws-scg89pq,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-434    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-434-aws-scg89pq,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-434-aws-scg89pq
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Aug 25 04:10:55.078: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-v5pv6" in namespace "provisioning-434" to be "Succeeded or Failed"
Aug 25 04:10:55.181: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 103.196722ms
Aug 25 04:10:57.286: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207494315s
Aug 25 04:10:59.390: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311923038s
Aug 25 04:11:01.494: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415735837s
Aug 25 04:11:03.598: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519762331s
Aug 25 04:11:05.702: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.623548099s
... skipping 4 lines ...
Aug 25 04:11:16.222: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.143890945s
Aug 25 04:11:18.326: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.247499437s
Aug 25 04:11:20.429: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.351060194s
Aug 25 04:11:22.533: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.454753366s
Aug 25 04:11:24.639: INFO: Pod "pvc-volume-tester-writer-v5pv6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.560696376s
STEP: Saw pod success
Aug 25 04:11:24.639: INFO: Pod "pvc-volume-tester-writer-v5pv6" satisfied condition "Succeeded or Failed"
Aug 25 04:11:24.847: INFO: Pod pvc-volume-tester-writer-v5pv6 has the following logs: 
Aug 25 04:11:24.847: INFO: Deleting pod "pvc-volume-tester-writer-v5pv6" in namespace "provisioning-434"
Aug 25 04:11:24.954: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-v5pv6" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-32-67.eu-west-3.compute.internal"
Aug 25 04:11:25.371: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-8dh2b" in namespace "provisioning-434" to be "Succeeded or Failed"
Aug 25 04:11:25.475: INFO: Pod "pvc-volume-tester-reader-8dh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.140409ms
Aug 25 04:11:27.583: INFO: Pod "pvc-volume-tester-reader-8dh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211083244s
Aug 25 04:11:29.687: INFO: Pod "pvc-volume-tester-reader-8dh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315078372s
Aug 25 04:11:31.790: INFO: Pod "pvc-volume-tester-reader-8dh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41885772s
Aug 25 04:11:33.894: INFO: Pod "pvc-volume-tester-reader-8dh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522576694s
Aug 25 04:11:35.998: INFO: Pod "pvc-volume-tester-reader-8dh2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.626199625s
STEP: Saw pod success
Aug 25 04:11:35.998: INFO: Pod "pvc-volume-tester-reader-8dh2b" satisfied condition "Succeeded or Failed"
Aug 25 04:11:36.103: INFO: Pod pvc-volume-tester-reader-8dh2b has the following logs: hello world

Aug 25 04:11:36.104: INFO: Deleting pod "pvc-volume-tester-reader-8dh2b" in namespace "provisioning-434"
Aug 25 04:11:36.211: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-8dh2b" to be fully deleted
Aug 25 04:11:36.315: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-spv5p] to have phase Bound
Aug 25 04:11:36.419: INFO: PersistentVolumeClaim pvc-spv5p found and phase=Bound (103.512005ms)
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":3,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:52.589: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 19 lines ...
STEP: Destroying namespace "services-358" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":4,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:53.981: INFO: Only supported for providers [gce gke] (not aws)
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:54.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8431" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":6,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:55.072: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:11:54.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9399" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":5,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:55.118: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:45.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:10.556 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:91
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:11:56.035: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 116 lines ...
• [SLOW TEST:106.737 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:309
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:00.462: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Aug 25 04:11:17.635: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-hmhgg] to have phase Bound
Aug 25 04:11:17.739: INFO: PersistentVolumeClaim pvc-hmhgg found and phase=Bound (103.655964ms)
STEP: Deleting the previously created pod
Aug 25 04:11:30.260: INFO: Deleting pod "pvc-volume-tester-cd9pn" in namespace "csi-mock-volumes-9597"
Aug 25 04:11:30.368: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cd9pn" to be fully deleted
STEP: Checking CSI driver logs
Aug 25 04:11:38.699: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c0b12901-b5fc-4864-97b1-84de2fef6053/volumes/kubernetes.io~csi/pvc-7a3637a1-7e93-4b05-bd9e-8f69f1c44d5f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-cd9pn
Aug 25 04:11:38.699: INFO: Deleting pod "pvc-volume-tester-cd9pn" in namespace "csi-mock-volumes-9597"
STEP: Deleting claim pvc-hmhgg
Aug 25 04:11:39.028: INFO: Waiting up to 2m0s for PersistentVolume pvc-7a3637a1-7e93-4b05-bd9e-8f69f1c44d5f to get deleted
Aug 25 04:11:39.134: INFO: PersistentVolume pvc-7a3637a1-7e93-4b05-bd9e-8f69f1c44d5f was removed
STEP: Deleting storageclass csi-mock-volumes-9597-sc
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:437
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:487
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":3,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:01.318: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 74 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:01.598: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 61 lines ...
  Only supported for providers [gce] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:10.037: INFO: >>> kubeConfig: /root/.kube/config
... skipping 22 lines ...
Aug 25 04:11:30.216: INFO: PersistentVolumeClaim pvc-hthbj found but phase is Pending instead of Bound.
Aug 25 04:11:32.320: INFO: PersistentVolumeClaim pvc-hthbj found and phase=Bound (12.727156394s)
Aug 25 04:11:32.320: INFO: Waiting up to 3m0s for PersistentVolume local-btpr8 to have phase Bound
Aug 25 04:11:32.422: INFO: PersistentVolume local-btpr8 found and phase=Bound (102.753168ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fkxg
STEP: Creating a pod to test atomic-volume-subpath
Aug 25 04:11:32.753: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fkxg" in namespace "provisioning-6758" to be "Succeeded or Failed"
Aug 25 04:11:32.860: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Pending", Reason="", readiness=false. Elapsed: 106.568058ms
Aug 25 04:11:34.965: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212091885s
Aug 25 04:11:37.070: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316373013s
Aug 25 04:11:39.177: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423860732s
Aug 25 04:11:41.281: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52804914s
Aug 25 04:11:43.385: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Running", Reason="", readiness=true. Elapsed: 10.63150642s
... skipping 2 lines ...
Aug 25 04:11:49.695: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Running", Reason="", readiness=true. Elapsed: 16.942093928s
Aug 25 04:11:51.808: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Running", Reason="", readiness=true. Elapsed: 19.054949172s
Aug 25 04:11:53.912: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Running", Reason="", readiness=true. Elapsed: 21.15838152s
Aug 25 04:11:56.015: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Running", Reason="", readiness=true. Elapsed: 23.262093342s
Aug 25 04:11:58.119: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.365588604s
STEP: Saw pod success
Aug 25 04:11:58.119: INFO: Pod "pod-subpath-test-preprovisionedpv-fkxg" satisfied condition "Succeeded or Failed"
Aug 25 04:11:58.222: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-fkxg container test-container-subpath-preprovisionedpv-fkxg: <nil>
STEP: delete the pod
Aug 25 04:11:58.437: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fkxg to disappear
Aug 25 04:11:58.540: INFO: Pod pod-subpath-test-preprovisionedpv-fkxg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fkxg
Aug 25 04:11:58.541: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fkxg" in namespace "provisioning-6758"
... skipping 51 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":16,"failed":0}
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:12:02.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apf
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:55.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:03.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9040" for this suite.


• [SLOW TEST:8.941 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":6,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:04.094: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 62 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver emptydir doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:178
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:04.119: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:04.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-731" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":7,"skipped":49,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:52.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Aug 25 04:11:52.587: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 25 04:11:52.797: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4603" in namespace "provisioning-4603" to be "Succeeded or Failed"
Aug 25 04:11:52.905: INFO: Pod "hostpath-symlink-prep-provisioning-4603": Phase="Pending", Reason="", readiness=false. Elapsed: 107.469013ms
Aug 25 04:11:55.008: INFO: Pod "hostpath-symlink-prep-provisioning-4603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.210876388s
STEP: Saw pod success
Aug 25 04:11:55.008: INFO: Pod "hostpath-symlink-prep-provisioning-4603" satisfied condition "Succeeded or Failed"
Aug 25 04:11:55.008: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4603" in namespace "provisioning-4603"
Aug 25 04:11:55.117: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4603" to be fully deleted
Aug 25 04:11:55.219: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qmnz
STEP: Creating a pod to test subpath
Aug 25 04:11:55.323: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qmnz" in namespace "provisioning-4603" to be "Succeeded or Failed"
Aug 25 04:11:55.427: INFO: Pod "pod-subpath-test-inlinevolume-qmnz": Phase="Pending", Reason="", readiness=false. Elapsed: 103.476791ms
Aug 25 04:11:57.632: INFO: Pod "pod-subpath-test-inlinevolume-qmnz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308419479s
Aug 25 04:11:59.735: INFO: Pod "pod-subpath-test-inlinevolume-qmnz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411751942s
Aug 25 04:12:01.839: INFO: Pod "pod-subpath-test-inlinevolume-qmnz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.515309269s
STEP: Saw pod success
Aug 25 04:12:01.839: INFO: Pod "pod-subpath-test-inlinevolume-qmnz" satisfied condition "Succeeded or Failed"
Aug 25 04:12:01.944: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-qmnz container test-container-volume-inlinevolume-qmnz: <nil>
STEP: delete the pod
Aug 25 04:12:02.168: INFO: Waiting for pod pod-subpath-test-inlinevolume-qmnz to disappear
Aug 25 04:12:02.271: INFO: Pod pod-subpath-test-inlinevolume-qmnz no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qmnz
Aug 25 04:12:02.271: INFO: Deleting pod "pod-subpath-test-inlinevolume-qmnz" in namespace "provisioning-4603"
STEP: Deleting pod
Aug 25 04:12:02.374: INFO: Deleting pod "pod-subpath-test-inlinevolume-qmnz" in namespace "provisioning-4603"
Aug 25 04:12:02.581: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4603" in namespace "provisioning-4603" to be "Succeeded or Failed"
Aug 25 04:12:02.685: INFO: Pod "hostpath-symlink-prep-provisioning-4603": Phase="Pending", Reason="", readiness=false. Elapsed: 104.361428ms
Aug 25 04:12:04.789: INFO: Pod "hostpath-symlink-prep-provisioning-4603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208341352s
STEP: Saw pod success
Aug 25 04:12:04.789: INFO: Pod "hostpath-symlink-prep-provisioning-4603" satisfied condition "Succeeded or Failed"
Aug 25 04:12:04.789: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4603" in namespace "provisioning-4603"
Aug 25 04:12:04.913: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4603" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:05.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4603" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:05.244: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:12:04.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug 25 04:12:04.953: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:eu-west-3a]
Aug 25 04:12:04.953: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Aug 25 04:12:04.953: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 165 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
Aug 25 04:12:00.996: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 25 04:12:01.105: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kzlr
STEP: Creating a pod to test subpath
Aug 25 04:12:01.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kzlr" in namespace "provisioning-3067" to be "Succeeded or Failed"
Aug 25 04:12:01.324: INFO: Pod "pod-subpath-test-inlinevolume-kzlr": Phase="Pending", Reason="", readiness=false. Elapsed: 103.57299ms
Aug 25 04:12:03.428: INFO: Pod "pod-subpath-test-inlinevolume-kzlr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207725775s
Aug 25 04:12:05.533: INFO: Pod "pod-subpath-test-inlinevolume-kzlr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31247888s
STEP: Saw pod success
Aug 25 04:12:05.533: INFO: Pod "pod-subpath-test-inlinevolume-kzlr" satisfied condition "Succeeded or Failed"
Aug 25 04:12:05.637: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-kzlr container test-container-volume-inlinevolume-kzlr: <nil>
STEP: delete the pod
Aug 25 04:12:05.856: INFO: Waiting for pod pod-subpath-test-inlinevolume-kzlr to disappear
Aug 25 04:12:05.960: INFO: Pod pod-subpath-test-inlinevolume-kzlr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kzlr
Aug 25 04:12:05.960: INFO: Deleting pod "pod-subpath-test-inlinevolume-kzlr" in namespace "provisioning-3067"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
Aug 25 04:11:43.586: INFO: Creating new exec pod
Aug 25 04:11:49.102: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 25 04:11:50.223: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Aug 25 04:11:50.223: INFO: stdout: ""
Aug 25 04:11:50.224: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80'
Aug 25 04:11:53.344: INFO: rc: 1
Aug 25 04:11:53.344: INFO: Service reachability failing with error: error running /tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 100.66.186.77 80
nc: connect to 100.66.186.77 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Aug 25 04:11:54.344: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80'
Aug 25 04:11:57.574: INFO: rc: 1
Aug 25 04:11:57.574: INFO: Service reachability failing with error: error running /tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 100.66.186.77 80
nc: connect to 100.66.186.77 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Aug 25 04:11:58.344: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80'
Aug 25 04:12:01.469: INFO: rc: 1
Aug 25 04:12:01.469: INFO: Service reachability failing with error: error running /tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 100.66.186.77 80
nc: connect to 100.66.186.77 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Aug 25 04:12:02.344: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 100.66.186.77 80'
Aug 25 04:12:03.450: INFO: stderr: "+ nc -zv -t -w 2 100.66.186.77 80\nConnection to 100.66.186.77 80 port [tcp/http] succeeded!\n"
Aug 25 04:12:03.450: INFO: stdout: ""
Aug 25 04:12:03.450: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8605 exec execpodvwkl9 -- /bin/sh -x -c nc -zv -t -w 2 172.20.37.233 32698'
... skipping 20 lines ...
• [SLOW TEST:31.763 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:08.284: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 171 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":6,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:12.537: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:12.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4268" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:12.914: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:13.794: INFO: Only supported for providers [openstack] (not aws)
... skipping 143 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:15.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-464" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:16.019: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:16.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8954" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:16.674: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 127 lines ...
• [SLOW TEST:12.446 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:100
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 25 04:12:18.668: INFO: Successfully updated pod "pod-update-activedeadlineseconds-002eb6a4-6b1d-4156-9e86-362e1a2293bd"
Aug 25 04:12:18.668: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-002eb6a4-6b1d-4156-9e86-362e1a2293bd" in namespace "pods-4412" to be "terminated due to deadline exceeded"
Aug 25 04:12:18.771: INFO: Pod "pod-update-activedeadlineseconds-002eb6a4-6b1d-4156-9e86-362e1a2293bd": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 102.921063ms
Aug 25 04:12:18.771: INFO: Pod "pod-update-activedeadlineseconds-002eb6a4-6b1d-4156-9e86-362e1a2293bd" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:18.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4412" for this suite.


• [SLOW TEST:6.054 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:11:26.781: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Aug 25 04:11:27.296: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4163-aws-scphxf7
STEP: creating a claim
Aug 25 04:11:27.399: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-vljx
STEP: Creating a pod to test subpath
Aug 25 04:11:27.710: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vljx" in namespace "provisioning-4163" to be "Succeeded or Failed"
Aug 25 04:11:27.813: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 103.364431ms
Aug 25 04:11:29.916: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206526452s
Aug 25 04:11:32.021: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310753288s
Aug 25 04:11:34.125: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414840845s
Aug 25 04:11:36.228: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517988551s
Aug 25 04:11:38.337: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.62674971s
... skipping 3 lines ...
Aug 25 04:11:46.750: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.040561208s
Aug 25 04:11:48.854: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.143871191s
Aug 25 04:11:50.965: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 23.255577414s
Aug 25 04:11:53.071: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Pending", Reason="", readiness=false. Elapsed: 25.361247243s
Aug 25 04:11:55.175: INFO: Pod "pod-subpath-test-dynamicpv-vljx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.464612291s
STEP: Saw pod success
Aug 25 04:11:55.175: INFO: Pod "pod-subpath-test-dynamicpv-vljx" satisfied condition "Succeeded or Failed"
Aug 25 04:11:55.277: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-vljx container test-container-volume-dynamicpv-vljx: <nil>
STEP: delete the pod
Aug 25 04:11:55.506: INFO: Waiting for pod pod-subpath-test-dynamicpv-vljx to disappear
Aug 25 04:11:55.612: INFO: Pod pod-subpath-test-dynamicpv-vljx no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vljx
Aug 25 04:11:55.613: INFO: Deleting pod "pod-subpath-test-dynamicpv-vljx" in namespace "provisioning-4163"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:22.105: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 97 lines ...
• [SLOW TEST:16.982 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:22.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8838" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:22.884: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
... skipping 77 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 10 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:236

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":8,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:12:17.804: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
Aug 25 04:12:18.337: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 25 04:12:18.442: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pf56
STEP: Creating a pod to test subpath
Aug 25 04:12:18.549: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pf56" in namespace "provisioning-6692" to be "Succeeded or Failed"
Aug 25 04:12:18.653: INFO: Pod "pod-subpath-test-inlinevolume-pf56": Phase="Pending", Reason="", readiness=false. Elapsed: 104.20823ms
Aug 25 04:12:20.757: INFO: Pod "pod-subpath-test-inlinevolume-pf56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208352883s
Aug 25 04:12:22.873: INFO: Pod "pod-subpath-test-inlinevolume-pf56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324387734s
Aug 25 04:12:24.977: INFO: Pod "pod-subpath-test-inlinevolume-pf56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428318469s
STEP: Saw pod success
Aug 25 04:12:24.977: INFO: Pod "pod-subpath-test-inlinevolume-pf56" satisfied condition "Succeeded or Failed"
Aug 25 04:12:25.081: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-pf56 container test-container-subpath-inlinevolume-pf56: <nil>
STEP: delete the pod
Aug 25 04:12:25.300: INFO: Waiting for pod pod-subpath-test-inlinevolume-pf56 to disappear
Aug 25 04:12:25.404: INFO: Pod pod-subpath-test-inlinevolume-pf56 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pf56
Aug 25 04:12:25.404: INFO: Deleting pod "pod-subpath-test-inlinevolume-pf56" in namespace "provisioning-6692"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:25.845: INFO: Only supported for providers [gce gke] (not aws)
... skipping 247 lines ...
Aug 25 04:12:20.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 25 04:12:20.891: INFO: Waiting up to 5m0s for pod "pod-11c116a7-b014-41d3-ac1c-488247a868eb" in namespace "emptydir-5885" to be "Succeeded or Failed"
Aug 25 04:12:20.994: INFO: Pod "pod-11c116a7-b014-41d3-ac1c-488247a868eb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.326914ms
Aug 25 04:12:23.096: INFO: Pod "pod-11c116a7-b014-41d3-ac1c-488247a868eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205124778s
Aug 25 04:12:25.199: INFO: Pod "pod-11c116a7-b014-41d3-ac1c-488247a868eb": Phase="Running", Reason="", readiness=true. Elapsed: 4.308017336s
Aug 25 04:12:27.303: INFO: Pod "pod-11c116a7-b014-41d3-ac1c-488247a868eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.411385922s
STEP: Saw pod success
Aug 25 04:12:27.303: INFO: Pod "pod-11c116a7-b014-41d3-ac1c-488247a868eb" satisfied condition "Succeeded or Failed"
Aug 25 04:12:27.405: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-11c116a7-b014-41d3-ac1c-488247a868eb container test-container: <nil>
STEP: delete the pod
Aug 25 04:12:27.615: INFO: Waiting for pod pod-11c116a7-b014-41d3-ac1c-488247a868eb to disappear
Aug 25 04:12:27.718: INFO: Pod pod-11c116a7-b014-41d3-ac1c-488247a868eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.658 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:27.937: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:29.469: INFO: Only supported for providers [vsphere] (not aws)
... skipping 45 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-e5b9c49d-805d-422c-8c95-d364709fc7f2
STEP: Creating a pod to test consume configMaps
Aug 25 04:12:26.773: INFO: Waiting up to 5m0s for pod "pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b" in namespace "configmap-3042" to be "Succeeded or Failed"
Aug 25 04:12:26.877: INFO: Pod "pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.96194ms
Aug 25 04:12:28.981: INFO: Pod "pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208091472s
STEP: Saw pod success
Aug 25 04:12:28.981: INFO: Pod "pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b" satisfied condition "Succeeded or Failed"
Aug 25 04:12:29.085: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:12:29.299: INFO: Waiting for pod pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b to disappear
Aug 25 04:12:29.403: INFO: Pod pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:29.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3042" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:29.634: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:6.624 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1911
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":8,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:29.680: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":4,"skipped":33,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:34.293: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
• [SLOW TEST:5.464 seconds]
[sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:105
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":4,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:35.003: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:39.431: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:12:30.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169" in namespace "projected-3386" to be "Succeeded or Failed"
Aug 25 04:12:30.425: INFO: Pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169": Phase="Pending", Reason="", readiness=false. Elapsed: 102.807743ms
Aug 25 04:12:32.528: INFO: Pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206240392s
Aug 25 04:12:34.632: INFO: Pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169": Phase="Running", Reason="", readiness=true. Elapsed: 4.309574306s
Aug 25 04:12:36.735: INFO: Pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169": Phase="Running", Reason="", readiness=true. Elapsed: 6.412848157s
Aug 25 04:12:38.839: INFO: Pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.517296816s
STEP: Saw pod success
Aug 25 04:12:38.839: INFO: Pod "downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169" satisfied condition "Succeeded or Failed"
Aug 25 04:12:38.942: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169 container client-container: <nil>
STEP: delete the pod
Aug 25 04:12:39.153: INFO: Waiting for pod downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169 to disappear
Aug 25 04:12:39.256: INFO: Pod downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:39.474: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 39 lines ...
Aug 25 04:12:16.871: INFO: PersistentVolumeClaim pvc-tqh7h found but phase is Pending instead of Bound.
Aug 25 04:12:18.975: INFO: PersistentVolumeClaim pvc-tqh7h found and phase=Bound (14.837953405s)
Aug 25 04:12:18.975: INFO: Waiting up to 3m0s for PersistentVolume aws-pwhxd to have phase Bound
Aug 25 04:12:19.079: INFO: PersistentVolume aws-pwhxd found and phase=Bound (104.284624ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-g5kj
STEP: Creating a pod to test exec-volume-test
Aug 25 04:12:19.395: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-g5kj" in namespace "volume-6408" to be "Succeeded or Failed"
Aug 25 04:12:19.500: INFO: Pod "exec-volume-test-preprovisionedpv-g5kj": Phase="Pending", Reason="", readiness=false. Elapsed: 104.504807ms
Aug 25 04:12:21.604: INFO: Pod "exec-volume-test-preprovisionedpv-g5kj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209033504s
Aug 25 04:12:23.714: INFO: Pod "exec-volume-test-preprovisionedpv-g5kj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319172362s
Aug 25 04:12:25.819: INFO: Pod "exec-volume-test-preprovisionedpv-g5kj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423744327s
Aug 25 04:12:27.924: INFO: Pod "exec-volume-test-preprovisionedpv-g5kj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.528735903s
STEP: Saw pod success
Aug 25 04:12:27.924: INFO: Pod "exec-volume-test-preprovisionedpv-g5kj" satisfied condition "Succeeded or Failed"
Aug 25 04:12:28.028: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-g5kj container exec-container-preprovisionedpv-g5kj: <nil>
STEP: delete the pod
Aug 25 04:12:28.312: INFO: Waiting for pod exec-volume-test-preprovisionedpv-g5kj to disappear
Aug 25 04:12:28.418: INFO: Pod exec-volume-test-preprovisionedpv-g5kj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-g5kj
Aug 25 04:12:28.418: INFO: Deleting pod "exec-volume-test-preprovisionedpv-g5kj" in namespace "volume-6408"
STEP: Deleting pv and pvc
Aug 25 04:12:28.523: INFO: Deleting PersistentVolumeClaim "pvc-tqh7h"
Aug 25 04:12:28.629: INFO: Deleting PersistentVolume "aws-pwhxd"
Aug 25 04:12:28.937: INFO: Couldn't delete PD "aws://eu-west-3a/vol-0217f1390382addbf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0217f1390382addbf is currently attached to i-062fe19422156de8d
	status code: 400, request id: 0caf363f-34ec-4d9c-bbd0-9dab1e90e78f
Aug 25 04:12:34.535: INFO: Couldn't delete PD "aws://eu-west-3a/vol-0217f1390382addbf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0217f1390382addbf is currently attached to i-062fe19422156de8d
	status code: 400, request id: 877e723d-d5ef-44a5-af44-1084141f8152
Aug 25 04:12:40.102: INFO: Successfully deleted PD "aws://eu-west-3a/vol-0217f1390382addbf".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:40.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6408" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Aug 25 04:12:31.282: INFO: PersistentVolumeClaim pvc-4tnpl found but phase is Pending instead of Bound.
Aug 25 04:12:33.385: INFO: PersistentVolumeClaim pvc-4tnpl found and phase=Bound (12.73376765s)
Aug 25 04:12:33.385: INFO: Waiting up to 3m0s for PersistentVolume local-mvc47 to have phase Bound
Aug 25 04:12:33.489: INFO: PersistentVolume local-mvc47 found and phase=Bound (103.250962ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z97z
STEP: Creating a pod to test subpath
Aug 25 04:12:33.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z97z" in namespace "provisioning-9561" to be "Succeeded or Failed"
Aug 25 04:12:33.902: INFO: Pod "pod-subpath-test-preprovisionedpv-z97z": Phase="Pending", Reason="", readiness=false. Elapsed: 103.194201ms
Aug 25 04:12:36.005: INFO: Pod "pod-subpath-test-preprovisionedpv-z97z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206835732s
Aug 25 04:12:38.109: INFO: Pod "pod-subpath-test-preprovisionedpv-z97z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310695975s
Aug 25 04:12:40.213: INFO: Pod "pod-subpath-test-preprovisionedpv-z97z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414518152s
Aug 25 04:12:42.317: INFO: Pod "pod-subpath-test-preprovisionedpv-z97z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.518358447s
STEP: Saw pod success
Aug 25 04:12:42.317: INFO: Pod "pod-subpath-test-preprovisionedpv-z97z" satisfied condition "Succeeded or Failed"
Aug 25 04:12:42.420: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-z97z container test-container-subpath-preprovisionedpv-z97z: <nil>
STEP: delete the pod
Aug 25 04:12:42.637: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z97z to disappear
Aug 25 04:12:42.739: INFO: Pod pod-subpath-test-preprovisionedpv-z97z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z97z
Aug 25 04:12:42.740: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z97z" in namespace "provisioning-9561"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":23,"failed":0}
[BeforeEach] [sig-windows] Windows volume mounts 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Aug 25 04:12:45.617: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Windows volume mounts 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
Aug 25 04:12:31.909: INFO: PersistentVolumeClaim pvc-kbg7p found but phase is Pending instead of Bound.
Aug 25 04:12:34.012: INFO: PersistentVolumeClaim pvc-kbg7p found and phase=Bound (6.412138634s)
Aug 25 04:12:34.012: INFO: Waiting up to 3m0s for PersistentVolume local-d4wnp to have phase Bound
Aug 25 04:12:34.120: INFO: PersistentVolume local-d4wnp found and phase=Bound (107.5712ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t8sx
STEP: Creating a pod to test subpath
Aug 25 04:12:34.431: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t8sx" in namespace "provisioning-1610" to be "Succeeded or Failed"
Aug 25 04:12:34.534: INFO: Pod "pod-subpath-test-preprovisionedpv-t8sx": Phase="Pending", Reason="", readiness=false. Elapsed: 103.174987ms
Aug 25 04:12:36.638: INFO: Pod "pod-subpath-test-preprovisionedpv-t8sx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206598769s
Aug 25 04:12:38.741: INFO: Pod "pod-subpath-test-preprovisionedpv-t8sx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309868987s
Aug 25 04:12:40.844: INFO: Pod "pod-subpath-test-preprovisionedpv-t8sx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413037088s
Aug 25 04:12:42.947: INFO: Pod "pod-subpath-test-preprovisionedpv-t8sx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.515912817s
STEP: Saw pod success
Aug 25 04:12:42.947: INFO: Pod "pod-subpath-test-preprovisionedpv-t8sx" satisfied condition "Succeeded or Failed"
Aug 25 04:12:43.051: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-t8sx container test-container-volume-preprovisionedpv-t8sx: <nil>
STEP: delete the pod
Aug 25 04:12:43.271: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t8sx to disappear
Aug 25 04:12:43.374: INFO: Pod pod-subpath-test-preprovisionedpv-t8sx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t8sx
Aug 25 04:12:43.374: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t8sx" in namespace "provisioning-1610"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:12:46.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug 25 04:12:47.621: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f1eb0465-3ec8-4a22-a911-8d7a4a5be66e" in namespace "security-context-test-1325" to be "Succeeded or Failed"
Aug 25 04:12:47.724: INFO: Pod "busybox-readonly-false-f1eb0465-3ec8-4a22-a911-8d7a4a5be66e": Phase="Pending", Reason="", readiness=false. Elapsed: 103.361091ms
Aug 25 04:12:49.827: INFO: Pod "busybox-readonly-false-f1eb0465-3ec8-4a22-a911-8d7a4a5be66e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20623169s
Aug 25 04:12:49.827: INFO: Pod "busybox-readonly-false-f1eb0465-3ec8-4a22-a911-8d7a4a5be66e" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:49.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1325" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:50.070: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 155 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:12:46.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8" in namespace "projected-5745" to be "Succeeded or Failed"
Aug 25 04:12:46.364: INFO: Pod "downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8": Phase="Pending", Reason="", readiness=false. Elapsed: 108.025303ms
Aug 25 04:12:48.468: INFO: Pod "downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211882528s
Aug 25 04:12:50.571: INFO: Pod "downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314960504s
STEP: Saw pod success
Aug 25 04:12:50.571: INFO: Pod "downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8" satisfied condition "Succeeded or Failed"
Aug 25 04:12:50.677: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8 container client-container: <nil>
STEP: delete the pod
Aug 25 04:12:50.896: INFO: Waiting for pod downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8 to disappear
Aug 25 04:12:50.999: INFO: Pod downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.591 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":11,"skipped":101,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:12:32.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:53.290: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":42,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:55.047: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:12:56.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1551" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":8,"skipped":53,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:12:56.641: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":25,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:12:51.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Aug 25 04:12:51.856: INFO: Waiting up to 5m0s for pod "security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f" in namespace "security-context-6316" to be "Succeeded or Failed"
Aug 25 04:12:51.959: INFO: Pod "security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f": Phase="Pending", Reason="", readiness=false. Elapsed: 103.096107ms
Aug 25 04:12:54.068: INFO: Pod "security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212826454s
Aug 25 04:12:56.172: INFO: Pod "security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316382812s
Aug 25 04:12:58.275: INFO: Pod "security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.419725644s
STEP: Saw pod success
Aug 25 04:12:58.275: INFO: Pod "security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f" satisfied condition "Succeeded or Failed"
Aug 25 04:12:58.378: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f container test-container: <nil>
STEP: delete the pod
Aug 25 04:12:58.590: INFO: Waiting for pod security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f to disappear
Aug 25 04:12:58.693: INFO: Pod security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.674 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":10,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 55 lines ...
Aug 25 04:12:00.665: INFO: PersistentVolumeClaim csi-hostpath7452z found but phase is Pending instead of Bound.
Aug 25 04:12:02.769: INFO: PersistentVolumeClaim csi-hostpath7452z found but phase is Pending instead of Bound.
Aug 25 04:12:04.874: INFO: PersistentVolumeClaim csi-hostpath7452z found but phase is Pending instead of Bound.
Aug 25 04:12:06.978: INFO: PersistentVolumeClaim csi-hostpath7452z found and phase=Bound (10.626109856s)
STEP: Expanding non-expandable pvc
Aug 25 04:12:07.186: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug 25 04:12:07.395: INFO: Error updating pvc csi-hostpath7452z: persistentvolumeclaims "csi-hostpath7452z" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:12:09.604: INFO: Error updating pvc csi-hostpath7452z: persistentvolumeclaims "csi-hostpath7452z" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 14 lines ...
Aug 25 04:12:37.829: INFO: Error updating pvc csi-hostpath7452z: persistentvolumeclaims "csi-hostpath7452z" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug 25 04:12:37.830: INFO: Deleting PersistentVolumeClaim "csi-hostpath7452z"
Aug 25 04:12:37.935: INFO: Waiting up to 5m0s for PersistentVolume pvc-c50cc198-cbd5-4a78-86a7-eb49e3a16c3e to get deleted
Aug 25 04:12:38.040: INFO: PersistentVolume pvc-c50cc198-cbd5-4a78-86a7-eb49e3a16c3e was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-5971
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":5,"skipped":71,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:00.602: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 615 lines ...
• [SLOW TEST:14.026 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:04.174: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:12:57.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f" in namespace "projected-8996" to be "Succeeded or Failed"
Aug 25 04:12:57.425: INFO: Pod "downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f": Phase="Pending", Reason="", readiness=false. Elapsed: 103.410084ms
Aug 25 04:12:59.529: INFO: Pod "downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207690378s
Aug 25 04:13:01.637: INFO: Pod "downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315823466s
Aug 25 04:13:03.741: INFO: Pod "downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42018363s
STEP: Saw pod success
Aug 25 04:13:03.741: INFO: Pod "downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f" satisfied condition "Succeeded or Failed"
Aug 25 04:13:03.845: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f container client-container: <nil>
STEP: delete the pod
Aug 25 04:13:04.066: INFO: Waiting for pod downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f to disappear
Aug 25 04:13:04.170: INFO: Pod downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.691 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:04.389: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 46 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 23 lines ...
• [SLOW TEST:48.001 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:04.783: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 58 lines ...
• [SLOW TEST:113.467 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:142
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:13:02.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775" in namespace "projected-2500" to be "Succeeded or Failed"
Aug 25 04:13:02.241: INFO: Pod "downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775": Phase="Pending", Reason="", readiness=false. Elapsed: 103.263604ms
Aug 25 04:13:04.344: INFO: Pod "downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206991997s
Aug 25 04:13:06.453: INFO: Pod "downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315210979s
STEP: Saw pod success
Aug 25 04:13:06.453: INFO: Pod "downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775" satisfied condition "Succeeded or Failed"
Aug 25 04:13:06.563: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775 container client-container: <nil>
STEP: delete the pod
Aug 25 04:13:06.809: INFO: Waiting for pod downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775 to disappear
Aug 25 04:13:06.912: INFO: Pod downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.629 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 40 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":4,"skipped":14,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:07.232: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Aug 25 04:12:28.526: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-7259-aws-schnp47
STEP: creating a claim
Aug 25 04:12:28.633: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-tlbh
STEP: Creating a pod to test exec-volume-test
Aug 25 04:12:28.945: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-tlbh" in namespace "volume-7259" to be "Succeeded or Failed"
Aug 25 04:12:29.048: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 102.456121ms
Aug 25 04:12:31.151: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205506483s
Aug 25 04:12:33.254: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308599162s
Aug 25 04:12:35.357: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411273599s
Aug 25 04:12:37.460: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514299809s
Aug 25 04:12:39.563: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61732193s
Aug 25 04:12:41.666: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.720251865s
Aug 25 04:12:43.769: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.823468219s
Aug 25 04:12:45.872: INFO: Pod "exec-volume-test-dynamicpv-tlbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.926363913s
STEP: Saw pod success
Aug 25 04:12:45.872: INFO: Pod "exec-volume-test-dynamicpv-tlbh" satisfied condition "Succeeded or Failed"
Aug 25 04:12:45.975: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod exec-volume-test-dynamicpv-tlbh container exec-container-dynamicpv-tlbh: <nil>
STEP: delete the pod
Aug 25 04:12:46.200: INFO: Waiting for pod exec-volume-test-dynamicpv-tlbh to disappear
Aug 25 04:12:46.302: INFO: Pod exec-volume-test-dynamicpv-tlbh no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-tlbh
Aug 25 04:12:46.302: INFO: Deleting pod "exec-volume-test-dynamicpv-tlbh" in namespace "volume-7259"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:07.574: INFO: Only supported for providers [vsphere] (not aws)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Aug 25 04:13:04.820: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8043" to be "Succeeded or Failed"
Aug 25 04:13:04.923: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 102.806492ms
Aug 25 04:13:07.027: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206415219s
STEP: Saw pod success
Aug 25 04:13:07.027: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 25 04:13:07.130: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Aug 25 04:13:07.350: INFO: Waiting for pod pod-host-path-test to disappear
Aug 25 04:13:07.452: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 42 lines ...
• [SLOW TEST:5.503 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:13:07.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:8.073 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:115
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status","total":-1,"completed":7,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:15.777: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 129 lines ...
• [SLOW TEST:15.893 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:17.766: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 183 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:881
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:934
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 72 lines ...
• [SLOW TEST:60.949 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":11,"skipped":26,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":61,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:23.392: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":6,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:24.528: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:236

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:24.566: INFO: Driver local doesn't support ntfs -- skipping
... skipping 130 lines ...
Aug 25 04:13:16.822: INFO: PersistentVolumeClaim pvc-5hpz8 found but phase is Pending instead of Bound.
Aug 25 04:13:18.927: INFO: PersistentVolumeClaim pvc-5hpz8 found and phase=Bound (12.741039198s)
Aug 25 04:13:18.927: INFO: Waiting up to 3m0s for PersistentVolume local-lgd85 to have phase Bound
Aug 25 04:13:19.033: INFO: PersistentVolume local-lgd85 found and phase=Bound (105.915674ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5tf6
STEP: Creating a pod to test subpath
Aug 25 04:13:19.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5tf6" in namespace "provisioning-6184" to be "Succeeded or Failed"
Aug 25 04:13:19.465: INFO: Pod "pod-subpath-test-preprovisionedpv-5tf6": Phase="Pending", Reason="", readiness=false. Elapsed: 118.144488ms
Aug 25 04:13:21.570: INFO: Pod "pod-subpath-test-preprovisionedpv-5tf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223081703s
Aug 25 04:13:23.674: INFO: Pod "pod-subpath-test-preprovisionedpv-5tf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327449789s
STEP: Saw pod success
Aug 25 04:13:23.674: INFO: Pod "pod-subpath-test-preprovisionedpv-5tf6" satisfied condition "Succeeded or Failed"
Aug 25 04:13:23.781: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-5tf6 container test-container-subpath-preprovisionedpv-5tf6: <nil>
STEP: delete the pod
Aug 25 04:13:23.999: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5tf6 to disappear
Aug 25 04:13:24.104: INFO: Pod pod-subpath-test-preprovisionedpv-5tf6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5tf6
Aug 25 04:13:24.104: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5tf6" in namespace "provisioning-6184"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":74,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":7,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:25.679: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":12,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:29.191: INFO: Only supported for providers [openstack] (not aws)
... skipping 182 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":7,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:34.765: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":10,"skipped":58,"failed":0}

SS
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:13:16.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":47,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:37.598: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 288 lines ...
• [SLOW TEST:166.293 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:145
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":3,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:41.830: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 35 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:833
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":6,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:13:08.948: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
Aug 25 04:13:16.918: INFO: PersistentVolumeClaim pvc-qh6wj found but phase is Pending instead of Bound.
Aug 25 04:13:19.021: INFO: PersistentVolumeClaim pvc-qh6wj found and phase=Bound (2.20571812s)
Aug 25 04:13:19.021: INFO: Waiting up to 3m0s for PersistentVolume local-bh2tv to have phase Bound
Aug 25 04:13:19.124: INFO: PersistentVolume local-bh2tv found and phase=Bound (102.829344ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-njdz
STEP: Creating a pod to test atomic-volume-subpath
Aug 25 04:13:19.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-njdz" in namespace "provisioning-3296" to be "Succeeded or Failed"
Aug 25 04:13:19.536: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Pending", Reason="", readiness=false. Elapsed: 102.738333ms
Aug 25 04:13:21.639: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205867906s
Aug 25 04:13:23.742: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 4.308718763s
Aug 25 04:13:25.845: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 6.411942437s
Aug 25 04:13:27.948: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 8.51468103s
Aug 25 04:13:30.051: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 10.617643689s
Aug 25 04:13:32.154: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 12.72072445s
Aug 25 04:13:34.257: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 14.82416031s
Aug 25 04:13:36.360: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 16.927262215s
Aug 25 04:13:38.463: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 19.030154754s
Aug 25 04:13:40.568: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Running", Reason="", readiness=true. Elapsed: 21.13502715s
Aug 25 04:13:42.674: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.240444372s
STEP: Saw pod success
Aug 25 04:13:42.674: INFO: Pod "pod-subpath-test-preprovisionedpv-njdz" satisfied condition "Succeeded or Failed"
Aug 25 04:13:42.776: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-njdz container test-container-subpath-preprovisionedpv-njdz: <nil>
STEP: delete the pod
Aug 25 04:13:42.990: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-njdz to disappear
Aug 25 04:13:43.092: INFO: Pod pod-subpath-test-preprovisionedpv-njdz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-njdz
Aug 25 04:13:43.092: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-njdz" in namespace "provisioning-3296"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:44.541: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 78 lines ...
• [SLOW TEST:40.027 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:44.846: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 80 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":60,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":8,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:47.335: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:13:42.500: INFO: Waiting up to 5m0s for pod "metadata-volume-82388531-4f42-414d-b6ec-063514de30f0" in namespace "downward-api-9966" to be "Succeeded or Failed"
Aug 25 04:13:42.604: INFO: Pod "metadata-volume-82388531-4f42-414d-b6ec-063514de30f0": Phase="Pending", Reason="", readiness=false. Elapsed: 103.884735ms
Aug 25 04:13:44.706: INFO: Pod "metadata-volume-82388531-4f42-414d-b6ec-063514de30f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206275096s
Aug 25 04:13:46.809: INFO: Pod "metadata-volume-82388531-4f42-414d-b6ec-063514de30f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308713613s
STEP: Saw pod success
Aug 25 04:13:46.809: INFO: Pod "metadata-volume-82388531-4f42-414d-b6ec-063514de30f0" satisfied condition "Succeeded or Failed"
Aug 25 04:13:46.911: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod metadata-volume-82388531-4f42-414d-b6ec-063514de30f0 container client-container: <nil>
STEP: delete the pod
Aug 25 04:13:47.121: INFO: Waiting for pod metadata-volume-82388531-4f42-414d-b6ec-063514de30f0 to disappear
Aug 25 04:13:47.223: INFO: Pod metadata-volume-82388531-4f42-414d-b6ec-063514de30f0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.556 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":86,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:47.486: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 46 lines ...
Aug 25 04:13:24.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
Aug 25 04:13:25.132: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 25 04:13:25.341: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8829" in namespace "provisioning-8829" to be "Succeeded or Failed"
Aug 25 04:13:25.446: INFO: Pod "hostpath-symlink-prep-provisioning-8829": Phase="Pending", Reason="", readiness=false. Elapsed: 104.871224ms
Aug 25 04:13:27.550: INFO: Pod "hostpath-symlink-prep-provisioning-8829": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208922033s
Aug 25 04:13:29.658: INFO: Pod "hostpath-symlink-prep-provisioning-8829": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31665189s
STEP: Saw pod success
Aug 25 04:13:29.658: INFO: Pod "hostpath-symlink-prep-provisioning-8829" satisfied condition "Succeeded or Failed"
Aug 25 04:13:29.658: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8829" in namespace "provisioning-8829"
Aug 25 04:13:29.774: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8829" to be fully deleted
Aug 25 04:13:29.877: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-nc7p
Aug 25 04:13:32.187: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-8829 exec pod-subpath-test-inlinevolume-nc7p --container test-container-volume-inlinevolume-nc7p -- /bin/sh -c rm -r /test-volume/provisioning-8829'
Aug 25 04:13:33.327: INFO: stderr: ""
Aug 25 04:13:33.327: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-nc7p
Aug 25 04:13:33.327: INFO: Deleting pod "pod-subpath-test-inlinevolume-nc7p" in namespace "provisioning-8829"
Aug 25 04:13:33.431: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-nc7p" to be fully deleted
STEP: Deleting pod
Aug 25 04:13:43.638: INFO: Deleting pod "pod-subpath-test-inlinevolume-nc7p" in namespace "provisioning-8829"
Aug 25 04:13:43.844: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8829" in namespace "provisioning-8829" to be "Succeeded or Failed"
Aug 25 04:13:43.948: INFO: Pod "hostpath-symlink-prep-provisioning-8829": Phase="Pending", Reason="", readiness=false. Elapsed: 103.850722ms
Aug 25 04:13:46.051: INFO: Pod "hostpath-symlink-prep-provisioning-8829": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207183209s
Aug 25 04:13:48.155: INFO: Pod "hostpath-symlink-prep-provisioning-8829": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310675675s
STEP: Saw pod success
Aug 25 04:13:48.155: INFO: Pod "hostpath-symlink-prep-provisioning-8829" satisfied condition "Succeeded or Failed"
Aug 25 04:13:48.155: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8829" in namespace "provisioning-8829"
Aug 25 04:13:48.262: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8829" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:13:48.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8829" for this suite.
... skipping 39 lines ...
• [SLOW TEST:104.620 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:223
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":6,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:48.761: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 116 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":12,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:48.837: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:13:49.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5394" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:51.870: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:13:51.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa" in namespace "projected-6992" to be "Succeeded or Failed"
Aug 25 04:13:51.410: INFO: Pod "downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa": Phase="Pending", Reason="", readiness=false. Elapsed: 103.529983ms
Aug 25 04:13:53.514: INFO: Pod "downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207244642s
STEP: Saw pod success
Aug 25 04:13:53.514: INFO: Pod "downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa" satisfied condition "Succeeded or Failed"
Aug 25 04:13:53.617: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa container client-container: <nil>
STEP: delete the pod
Aug 25 04:13:53.832: INFO: Waiting for pod downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa to disappear
Aug 25 04:13:53.936: INFO: Pod downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:179
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":7,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:57.243: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:13:20.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
Aug 25 04:13:21.359: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 25 04:13:21.570: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-4375" in namespace "volume-4375" to be "Succeeded or Failed"
Aug 25 04:13:21.674: INFO: Pod "hostpath-symlink-prep-volume-4375": Phase="Pending", Reason="", readiness=false. Elapsed: 103.750567ms
Aug 25 04:13:23.780: INFO: Pod "hostpath-symlink-prep-volume-4375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209850779s
STEP: Saw pod success
Aug 25 04:13:23.780: INFO: Pod "hostpath-symlink-prep-volume-4375" satisfied condition "Succeeded or Failed"
Aug 25 04:13:23.780: INFO: Deleting pod "hostpath-symlink-prep-volume-4375" in namespace "volume-4375"
Aug 25 04:13:23.890: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-4375" to be fully deleted
Aug 25 04:13:23.993: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Aug 25 04:13:26.306: INFO: Running '/tmp/kubectl940786868/kubectl --server=https://api.e2e-187541ca57-a9514.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-4375 exec hostpathsymlink-injector --namespace=volume-4375 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-4375' > /opt/0/index.html'
... skipping 40 lines ...
Aug 25 04:13:51.675: INFO: Pod hostpathsymlink-client still exists
Aug 25 04:13:53.571: INFO: Waiting for pod hostpathsymlink-client to disappear
Aug 25 04:13:53.675: INFO: Pod hostpathsymlink-client still exists
Aug 25 04:13:55.571: INFO: Waiting for pod hostpathsymlink-client to disappear
Aug 25 04:13:55.675: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Aug 25 04:13:55.782: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-4375" in namespace "volume-4375" to be "Succeeded or Failed"
Aug 25 04:13:55.886: INFO: Pod "hostpath-symlink-prep-volume-4375": Phase="Pending", Reason="", readiness=false. Elapsed: 103.836017ms
Aug 25 04:13:57.991: INFO: Pod "hostpath-symlink-prep-volume-4375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20903696s
STEP: Saw pod success
Aug 25 04:13:57.991: INFO: Pod "hostpath-symlink-prep-volume-4375" satisfied condition "Succeeded or Failed"
Aug 25 04:13:57.991: INFO: Deleting pod "hostpath-symlink-prep-volume-4375" in namespace "volume-4375"
Aug 25 04:13:58.099: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-4375" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:13:58.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4375" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":14,"skipped":106,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:58.443: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 181 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
Aug 25 04:13:52.424: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 25 04:13:52.424: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hxm8
STEP: Creating a pod to test subpath
Aug 25 04:13:52.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hxm8" in namespace "provisioning-5152" to be "Succeeded or Failed"
Aug 25 04:13:52.635: INFO: Pod "pod-subpath-test-inlinevolume-hxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 104.616099ms
Aug 25 04:13:54.740: INFO: Pod "pod-subpath-test-inlinevolume-hxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209077168s
Aug 25 04:13:56.844: INFO: Pod "pod-subpath-test-inlinevolume-hxm8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313584725s
Aug 25 04:13:58.950: INFO: Pod "pod-subpath-test-inlinevolume-hxm8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418851011s
STEP: Saw pod success
Aug 25 04:13:58.950: INFO: Pod "pod-subpath-test-inlinevolume-hxm8" satisfied condition "Succeeded or Failed"
Aug 25 04:13:59.054: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-hxm8 container test-container-volume-inlinevolume-hxm8: <nil>
STEP: delete the pod
Aug 25 04:13:59.274: INFO: Waiting for pod pod-subpath-test-inlinevolume-hxm8 to disappear
Aug 25 04:13:59.378: INFO: Pod pod-subpath-test-inlinevolume-hxm8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hxm8
Aug 25 04:13:59.378: INFO: Deleting pod "pod-subpath-test-inlinevolume-hxm8" in namespace "provisioning-5152"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":87,"failed":0}

S
------------------------------
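The repeated "Waiting up to 5m0s for pod ... Elapsed: ..." lines above come from a poll loop that re-reads the pod phase every couple of seconds until it is terminal. A minimal sketch of that pattern (the `get_phase` callable is a hypothetical stand-in for the API read, not the e2e framework's actual code):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0):
    """Poll until the pod reaches a terminal phase, mirroring the
    'Waiting up to 5m0s ... Elapsed: ...' lines in the log above.
    get_phase is a hypothetical callable standing in for an API read."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        time.sleep(poll_s)

# Simulated pod that stays Pending for two polls, then succeeds,
# like pod-subpath-test-inlinevolume-hxm8 above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), poll_s=0.01)
```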
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:13:59.813: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:13:54.157: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
Aug 25 04:13:54.675: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 25 04:13:54.779: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pfjm
STEP: Creating a pod to test subpath
Aug 25 04:13:54.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pfjm" in namespace "provisioning-2471" to be "Succeeded or Failed"
Aug 25 04:13:54.989: INFO: Pod "pod-subpath-test-inlinevolume-pfjm": Phase="Pending", Reason="", readiness=false. Elapsed: 103.304232ms
Aug 25 04:13:57.093: INFO: Pod "pod-subpath-test-inlinevolume-pfjm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207082722s
Aug 25 04:13:59.197: INFO: Pod "pod-subpath-test-inlinevolume-pfjm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310994646s
STEP: Saw pod success
Aug 25 04:13:59.197: INFO: Pod "pod-subpath-test-inlinevolume-pfjm" satisfied condition "Succeeded or Failed"
Aug 25 04:13:59.300: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-pfjm container test-container-subpath-inlinevolume-pfjm: <nil>
STEP: delete the pod
Aug 25 04:13:59.522: INFO: Waiting for pod pod-subpath-test-inlinevolume-pfjm to disappear
Aug 25 04:13:59.624: INFO: Pod pod-subpath-test-inlinevolume-pfjm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pfjm
Aug 25 04:13:59.625: INFO: Deleting pod "pod-subpath-test-inlinevolume-pfjm" in namespace "provisioning-2471"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:00.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1910" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:00.563: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 74 lines ...
Aug 25 04:13:29.740: INFO: Creating resource for dynamic PV
Aug 25 04:13:29.740: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-989-aws-sctjt74
STEP: creating a claim
STEP: Expanding non-expandable pvc
Aug 25 04:13:30.051: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug 25 04:13:30.258: INFO: Error updating pvc awszdfzt: PersistentVolumeClaim "awszdfzt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-989-aws-sctjt74",
  	... // 2 identical fields
  }

... skipping 15 identical "Error updating pvc awszdfzt" retries (04:13:32.465 – 04:14:00.466) ...
Aug 25 04:14:00.673: INFO: Error updating pvc awszdfzt: PersistentVolumeClaim "awszdfzt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":13,"skipped":42,"failed":0}

SSSSS
------------------------------
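The volume-expand test above deliberately provokes the apiserver error "spec: Forbidden: spec is immutable after creation except resources.requests for bound claims": because the StorageClass was created without AllowVolumeExpansion, even the 1Gi -> 2Gi request bump (1073741824 -> 2147483648 bytes in the BinarySI quantities logged) is rejected on every retry. A simplified sketch of that rule (not the real Kubernetes validation code; field names abbreviated):

```python
import copy

GIB = 1024 ** 3  # 1Gi = 1073741824 bytes; the attempted 2Gi is 2147483648

def validate_pvc_update(old_spec, new_spec, allow_expansion):
    """Simplified sketch of the bound-claim immutability rule quoted in
    the log: only resources.requests may change, and only when the
    StorageClass permits expansion."""
    old_cmp, new_cmp = copy.deepcopy(old_spec), copy.deepcopy(new_spec)
    if allow_expansion:
        # Ignore resources.requests so a pure size change compares equal.
        old_cmp.get("resources", {}).pop("requests", None)
        new_cmp.get("resources", {}).pop("requests", None)
    if old_cmp != new_cmp:
        return ("spec: Forbidden: spec is immutable after creation "
                "except resources.requests for bound claims")
    return None

old = {"accessModes": ["ReadWriteOnce"],
       "storageClassName": "volume-expand-989-aws-sctjt74",
       "resources": {"requests": {"storage": 1 * GIB}}}
grown = copy.deepcopy(old)
grown["resources"]["requests"]["storage"] = 2 * GIB

# Without AllowVolumeExpansion the resize is rejected on every retry,
# which is exactly what the test loop above asserts.
rejected = validate_pvc_update(old, grown, allow_expansion=False)
# With expansion allowed, the same requests-only change would pass.
accepted = validate_pvc_update(old, grown, allow_expansion=True)
```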
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:01.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-6966" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":10,"skipped":47,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:01.757: INFO: Only supported for providers [gce gke] (not aws)
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:01.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4876" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":15,"skipped":122,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:01.948: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":10,"skipped":81,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:13:48.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
• [SLOW TEST:61.919 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:09.142: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-downwardapi-479t
STEP: Creating a pod to test atomic-volume-subpath
Aug 25 04:13:45.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-479t" in namespace "subpath-2837" to be "Succeeded or Failed"
Aug 25 04:13:45.802: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Pending", Reason="", readiness=false. Elapsed: 104.11295ms
Aug 25 04:13:47.906: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208358077s
Aug 25 04:13:50.012: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 4.314305104s
Aug 25 04:13:52.116: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 6.418556162s
Aug 25 04:13:54.222: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 8.524709477s
Aug 25 04:13:56.326: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 10.628918704s
Aug 25 04:13:58.430: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 12.732863965s
Aug 25 04:14:00.535: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 14.837019225s
Aug 25 04:14:02.639: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 16.941187848s
Aug 25 04:14:04.743: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 19.045291987s
Aug 25 04:14:06.847: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Running", Reason="", readiness=true. Elapsed: 21.149758434s
Aug 25 04:14:08.951: INFO: Pod "pod-subpath-test-downwardapi-479t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.253796972s
STEP: Saw pod success
Aug 25 04:14:08.951: INFO: Pod "pod-subpath-test-downwardapi-479t" satisfied condition "Succeeded or Failed"
Aug 25 04:14:09.055: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-downwardapi-479t container test-container-subpath-downwardapi-479t: <nil>
STEP: delete the pod
Aug 25 04:14:09.269: INFO: Waiting for pod pod-subpath-test-downwardapi-479t to disappear
Aug 25 04:14:09.372: INFO: Pod pod-subpath-test-downwardapi-479t no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-479t
Aug 25 04:14:09.372: INFO: Deleting pod "pod-subpath-test-downwardapi-479t" in namespace "subpath-2837"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:09.708: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 43 lines ...
Aug 25 04:14:01.849: INFO: PersistentVolumeClaim pvc-npmff found but phase is Pending instead of Bound.
Aug 25 04:14:03.957: INFO: PersistentVolumeClaim pvc-npmff found and phase=Bound (12.737258988s)
Aug 25 04:14:03.957: INFO: Waiting up to 3m0s for PersistentVolume local-8bdg5 to have phase Bound
Aug 25 04:14:04.062: INFO: PersistentVolume local-8bdg5 found and phase=Bound (105.08991ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nd9d
STEP: Creating a pod to test subpath
Aug 25 04:14:04.378: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nd9d" in namespace "provisioning-1257" to be "Succeeded or Failed"
Aug 25 04:14:04.483: INFO: Pod "pod-subpath-test-preprovisionedpv-nd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 105.18193ms
Aug 25 04:14:06.588: INFO: Pod "pod-subpath-test-preprovisionedpv-nd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209625057s
Aug 25 04:14:08.692: INFO: Pod "pod-subpath-test-preprovisionedpv-nd9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314065287s
STEP: Saw pod success
Aug 25 04:14:08.692: INFO: Pod "pod-subpath-test-preprovisionedpv-nd9d" satisfied condition "Succeeded or Failed"
Aug 25 04:14:08.797: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-nd9d container test-container-subpath-preprovisionedpv-nd9d: <nil>
STEP: delete the pod
Aug 25 04:14:09.013: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nd9d to disappear
Aug 25 04:14:09.117: INFO: Pod pod-subpath-test-preprovisionedpv-nd9d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nd9d
Aug 25 04:14:09.117: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nd9d" in namespace "provisioning-1257"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:10.592: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 34 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":11,"skipped":81,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:14:07.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
Aug 25 04:14:08.560: INFO: Waiting up to 5m0s for pod "var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e" in namespace "var-expansion-4802" to be "Succeeded or Failed"
Aug 25 04:14:08.663: INFO: Pod "var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e": Phase="Pending", Reason="", readiness=false. Elapsed: 103.171596ms
Aug 25 04:14:10.770: INFO: Pod "var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.210389994s
STEP: Saw pod success
Aug 25 04:14:10.770: INFO: Pod "var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e" satisfied condition "Succeeded or Failed"
Aug 25 04:14:10.873: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e container dapi-container: <nil>
STEP: delete the pod
Aug 25 04:14:11.091: INFO: Waiting for pod var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e to disappear
Aug 25 04:14:11.193: INFO: Pod var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:11.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4802" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":12,"skipped":81,"failed":0}

SSS
------------------------------
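The Variable Expansion test above ("should allow substituting values in a volume subpath") exercises Kubernetes `$(VAR_NAME)` substitution in a volumeMount subPath. A minimal sketch of that substitution rule (simplified: the real kubelet expansion also handles `$$` escaping, which is omitted here):

```python
import re

def expand(subpath, env):
    """Sketch of Kubernetes-style $(VAR_NAME) expansion as exercised by
    the var-expansion test; unresolvable references are left as-is."""
    def repl(match):
        name = match.group(1)
        # Fall back to the literal $(NAME) text when the variable is unset.
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, subpath)

expanded = expand("$(POD_NAME)/logs", {"POD_NAME": "var-expansion-test"})
missing = expand("$(UNSET)/logs", {})
```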
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:11.429: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 44 lines ...
Aug 25 04:14:01.149: INFO: PersistentVolumeClaim pvc-dbjjd found but phase is Pending instead of Bound.
Aug 25 04:14:03.252: INFO: PersistentVolumeClaim pvc-dbjjd found and phase=Bound (14.830517834s)
Aug 25 04:14:03.252: INFO: Waiting up to 3m0s for PersistentVolume local-xvb87 to have phase Bound
Aug 25 04:14:03.357: INFO: PersistentVolume local-xvb87 found and phase=Bound (104.351668ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2tnj
STEP: Creating a pod to test subpath
Aug 25 04:14:03.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2tnj" in namespace "provisioning-306" to be "Succeeded or Failed"
Aug 25 04:14:03.772: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj": Phase="Pending", Reason="", readiness=false. Elapsed: 106.154702ms
Aug 25 04:14:05.876: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209606014s
Aug 25 04:14:07.979: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312655658s
STEP: Saw pod success
Aug 25 04:14:07.979: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj" satisfied condition "Succeeded or Failed"
Aug 25 04:14:08.082: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-2tnj container test-container-subpath-preprovisionedpv-2tnj: <nil>
STEP: delete the pod
Aug 25 04:14:08.320: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2tnj to disappear
Aug 25 04:14:08.422: INFO: Pod pod-subpath-test-preprovisionedpv-2tnj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2tnj
Aug 25 04:14:08.422: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2tnj" in namespace "provisioning-306"
STEP: Creating pod pod-subpath-test-preprovisionedpv-2tnj
STEP: Creating a pod to test subpath
Aug 25 04:14:08.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2tnj" in namespace "provisioning-306" to be "Succeeded or Failed"
Aug 25 04:14:08.731: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj": Phase="Pending", Reason="", readiness=false. Elapsed: 102.432839ms
Aug 25 04:14:10.835: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205597892s
STEP: Saw pod success
Aug 25 04:14:10.835: INFO: Pod "pod-subpath-test-preprovisionedpv-2tnj" satisfied condition "Succeeded or Failed"
Aug 25 04:14:10.937: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-2tnj container test-container-subpath-preprovisionedpv-2tnj: <nil>
STEP: delete the pod
Aug 25 04:14:11.154: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2tnj to disappear
Aug 25 04:14:11.257: INFO: Pod pod-subpath-test-preprovisionedpv-2tnj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2tnj
Aug 25 04:14:11.257: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2tnj" in namespace "provisioning-306"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:12.713: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:13.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9552" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":87,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:13.855: INFO: Only supported for providers [gce gke] (not aws)
... skipping 74 lines ...
Aug 25 04:13:16.759: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathqcvjg] to have phase Bound
Aug 25 04:13:16.863: INFO: PersistentVolumeClaim csi-hostpathqcvjg found but phase is Pending instead of Bound.
Aug 25 04:13:18.966: INFO: PersistentVolumeClaim csi-hostpathqcvjg found but phase is Pending instead of Bound.
Aug 25 04:13:21.071: INFO: PersistentVolumeClaim csi-hostpathqcvjg found and phase=Bound (4.311227404s)
STEP: Expanding non-expandable pvc
Aug 25 04:13:21.278: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug 25 04:13:21.485: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:23.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:25.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:27.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:29.696: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:31.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:33.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:35.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:37.693: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:39.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:41.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:43.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:45.692: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:47.693: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:49.700: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:51.693: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 25 04:13:51.900: INFO: Error updating pvc csi-hostpathqcvjg: persistentvolumeclaims "csi-hostpathqcvjg" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug 25 04:13:51.900: INFO: Deleting PersistentVolumeClaim "csi-hostpathqcvjg"
Aug 25 04:13:52.004: INFO: Waiting up to 5m0s for PersistentVolume pvc-6ab7e625-926b-4838-9aa9-8f3ea9a36005 to get deleted
Aug 25 04:13:52.107: INFO: PersistentVolume pvc-6ab7e625-926b-4838-9aa9-8f3ea9a36005 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-4519
... skipping 161 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:16.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9245" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":11,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:16.851: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:18.047: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 86 lines ...
Aug 25 04:13:51.759: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:52.481: INFO: Exec stderr: ""
Aug 25 04:13:54.796: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-4510"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-4510"/host; echo host > "/var/lib/kubelet/mount-propagation-4510"/host/file] Namespace:mount-propagation-4510 PodName:hostexec-ip-172-20-36-72.eu-west-3.compute.internal-h5c5v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 25 04:13:54.796: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:55.604: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4510 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:13:55.604: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:56.331: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:13:56.435: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4510 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:13:56.435: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:57.129: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:13:57.233: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4510 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:13:57.233: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:57.930: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Aug 25 04:13:58.035: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4510 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:13:58.035: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:58.731: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:13:58.836: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4510 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:13:58.836: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:13:59.539: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:13:59.646: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4510 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:13:59.646: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:00.408: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:00.513: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4510 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:00.513: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:01.284: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:01.389: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4510 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:01.389: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:02.121: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:02.225: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4510 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:02.225: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:03.004: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Aug 25 04:14:03.109: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4510 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:03.109: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:03.857: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:03.962: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4510 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:03.962: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:04.690: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Aug 25 04:14:04.795: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4510 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:04.795: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:05.539: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:05.648: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4510 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:05.649: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:06.331: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:06.436: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4510 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:06.436: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:07.148: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:07.253: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4510 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:07.253: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:08.003: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Aug 25 04:14:08.107: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4510 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:08.107: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:08.821: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Aug 25 04:14:08.925: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4510 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:08.925: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:09.639: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Aug 25 04:14:09.744: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4510 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:09.744: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:10.443: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:10.548: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4510 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:10.548: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:11.263: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Aug 25 04:14:11.367: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4510 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 25 04:14:11.367: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:12.073: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Aug 25 04:14:12.073: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-4510"/master/file` = master] Namespace:mount-propagation-4510 PodName:hostexec-ip-172-20-36-72.eu-west-3.compute.internal-h5c5v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 25 04:14:12.073: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:12.788: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-4510"/slave/file] Namespace:mount-propagation-4510 PodName:hostexec-ip-172-20-36-72.eu-west-3.compute.internal-h5c5v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 25 04:14:12.789: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 04:14:13.493: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-4510"/host] Namespace:mount-propagation-4510 PodName:hostexec-ip-172-20-36-72.eu-west-3.compute.internal-h5c5v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 25 04:14:13.493: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:52.939 seconds]
[k8s.io] [sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":8,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:18.666: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
... skipping 171 lines ...
Aug 25 04:14:15.800: INFO: PersistentVolumeClaim pvc-7tfkx found but phase is Pending instead of Bound.
Aug 25 04:14:17.904: INFO: PersistentVolumeClaim pvc-7tfkx found and phase=Bound (4.311491261s)
Aug 25 04:14:17.904: INFO: Waiting up to 3m0s for PersistentVolume local-4wfsz to have phase Bound
Aug 25 04:14:18.008: INFO: PersistentVolume local-4wfsz found and phase=Bound (103.618579ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mfx5
STEP: Creating a pod to test subpath
Aug 25 04:14:18.320: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mfx5" in namespace "provisioning-1141" to be "Succeeded or Failed"
Aug 25 04:14:18.423: INFO: Pod "pod-subpath-test-preprovisionedpv-mfx5": Phase="Pending", Reason="", readiness=false. Elapsed: 103.594962ms
Aug 25 04:14:20.527: INFO: Pod "pod-subpath-test-preprovisionedpv-mfx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207647948s
STEP: Saw pod success
Aug 25 04:14:20.528: INFO: Pod "pod-subpath-test-preprovisionedpv-mfx5" satisfied condition "Succeeded or Failed"
Aug 25 04:14:20.632: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-mfx5 container test-container-volume-preprovisionedpv-mfx5: <nil>
STEP: delete the pod
Aug 25 04:14:20.848: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mfx5 to disappear
Aug 25 04:14:20.952: INFO: Pod pod-subpath-test-preprovisionedpv-mfx5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mfx5
Aug 25 04:14:20.952: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mfx5" in namespace "provisioning-1141"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:14:16.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-2fc617d1-09ca-461c-a717-6e0d01a88494
STEP: Creating a pod to test consume configMaps
Aug 25 04:14:17.593: INFO: Waiting up to 5m0s for pod "pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593" in namespace "configmap-7618" to be "Succeeded or Failed"
Aug 25 04:14:17.701: INFO: Pod "pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593": Phase="Pending", Reason="", readiness=false. Elapsed: 107.892764ms
Aug 25 04:14:19.806: INFO: Pod "pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212684852s
Aug 25 04:14:21.910: INFO: Pod "pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317268153s
STEP: Saw pod success
Aug 25 04:14:21.910: INFO: Pod "pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593" satisfied condition "Succeeded or Failed"
Aug 25 04:14:22.016: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:14:22.232: INFO: Waiting for pod pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593 to disappear
Aug 25 04:14:22.336: INFO: Pod pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.689 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:22.565: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 120 lines ...
STEP: Deleting pod hostexec-ip-172-20-38-132.eu-west-3.compute.internal-snrh5 in namespace volumemode-8508
Aug 25 04:14:11.038: INFO: Deleting pod "pod-1a983f4b-7d0f-4aac-be20-dcbb874524f4" in namespace "volumemode-8508"
Aug 25 04:14:11.143: INFO: Wait up to 5m0s for pod "pod-1a983f4b-7d0f-4aac-be20-dcbb874524f4" to be fully deleted
STEP: Deleting pv and pvc
Aug 25 04:14:13.352: INFO: Deleting PersistentVolumeClaim "pvc-58rrd"
Aug 25 04:14:13.458: INFO: Deleting PersistentVolume "aws-vtsqs"
Aug 25 04:14:13.809: INFO: Couldn't delete PD "aws://eu-west-3a/vol-028b0dedea940b60a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-028b0dedea940b60a is currently attached to i-092c3f7849093a565
	status code: 400, request id: b5383ce4-f2b7-4d42-b827-1b072427baf4
Aug 25 04:14:19.415: INFO: Couldn't delete PD "aws://eu-west-3a/vol-028b0dedea940b60a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-028b0dedea940b60a is currently attached to i-092c3f7849093a565
	status code: 400, request id: 590963d2-cf3f-40d0-b20e-4c0c46fcca1f
Aug 25 04:14:25.001: INFO: Successfully deleted PD "aws://eu-west-3a/vol-028b0dedea940b60a".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:25.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-8508" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:38.260 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:124
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":5,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:27.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5054" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":73,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:27.301: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 37 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:14:14.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 90 lines ...
• [SLOW TEST:7.454 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":13,"skipped":102,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":10,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:14:29.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:14:30.348: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e" in namespace "downward-api-2636" to be "Succeeded or Failed"
Aug 25 04:14:30.452: INFO: Pod "downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e": Phase="Pending", Reason="", readiness=false. Elapsed: 103.660462ms
Aug 25 04:14:32.556: INFO: Pod "downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207564382s
STEP: Saw pod success
Aug 25 04:14:32.556: INFO: Pod "downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e" satisfied condition "Succeeded or Failed"
Aug 25 04:14:32.659: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e container client-container: <nil>
STEP: delete the pod
Aug 25 04:14:32.873: INFO: Waiting for pod downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e to disappear
Aug 25 04:14:32.976: INFO: Pod downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:14:32.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2636" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:33.206: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 79 lines ...
      Driver csi-hostpath doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:178
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:14:01.348: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Aug 25 04:14:01.868: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1799-aws-scr7thh
STEP: creating a claim
Aug 25 04:14:01.971: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zt4t
STEP: Creating a pod to test subpath
Aug 25 04:14:02.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zt4t" in namespace "provisioning-1799" to be "Succeeded or Failed"
Aug 25 04:14:02.393: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 102.784936ms
Aug 25 04:14:04.497: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206038629s
Aug 25 04:14:06.600: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309957446s
Aug 25 04:14:08.703: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413019257s
Aug 25 04:14:10.807: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516228061s
Aug 25 04:14:12.910: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.619249805s
Aug 25 04:14:15.013: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.722412461s
Aug 25 04:14:17.118: INFO: Pod "pod-subpath-test-dynamicpv-zt4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.827409347s
STEP: Saw pod success
Aug 25 04:14:17.118: INFO: Pod "pod-subpath-test-dynamicpv-zt4t" satisfied condition "Succeeded or Failed"
Aug 25 04:14:17.221: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-zt4t container test-container-volume-dynamicpv-zt4t: <nil>
STEP: delete the pod
Aug 25 04:14:17.438: INFO: Waiting for pod pod-subpath-test-dynamicpv-zt4t to disappear
Aug 25 04:14:17.541: INFO: Pod pod-subpath-test-dynamicpv-zt4t no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zt4t
Aug 25 04:14:17.541: INFO: Deleting pod "pod-subpath-test-dynamicpv-zt4t" in namespace "provisioning-1799"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":14,"skipped":70,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:33.820: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 165 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:502
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:14:39.119: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 18 lines ...
Aug 25 04:14:33.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 25 04:14:34.494: INFO: Waiting up to 5m0s for pod "pod-877cbb00-66ed-44ff-99e0-60ffe69e9891" in namespace "emptydir-8928" to be "Succeeded or Failed"
Aug 25 04:14:34.598: INFO: Pod "pod-877cbb00-66ed-44ff-99e0-60ffe69e9891": Phase="Pending", Reason="", readiness=false. Elapsed: 103.102473ms
Aug 25 04:14:36.701: INFO: Pod "pod-877cbb00-66ed-44ff-99e0-60ffe69e9891": Phase="Running", Reason="", readiness=true. Elapsed: 2.206432413s
Aug 25 04:14:38.805: INFO: Pod "pod-877cbb00-66ed-44ff-99e0-60ffe69e9891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310946494s
STEP: Saw pod success
Aug 25 04:14:38.806: INFO: Pod "pod-877cbb00-66ed-44ff-99e0-60ffe69e9891" satisfied condition "Succeeded or Failed"
Aug 25 04:14:38.909: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-877cbb00-66ed-44ff-99e0-60ffe69e9891 container test-container: <nil>
STEP: delete the pod
Aug 25 04:14:39.120: INFO: Waiting for pod pod-877cbb00-66ed-44ff-99e0-60ffe69e9891 to disappear
Aug 25 04:14:39.223: INFO: Pod pod-877cbb00-66ed-44ff-99e0-60ffe69e9891 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.556 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":83,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 55738 lines ...
led updated: 1 ports\nI0825 04:20:51.353452       1 service.go:390] Adding new service port \"services-9512/service-proxy-toggled\" at 100.65.200.134:80/TCP\nI0825 04:20:51.353799       1 proxier.go:871] Syncing iptables rules\nI0825 04:20:51.394553       1 proxier.go:826] syncProxyRules took 41.260875ms\nI0825 04:20:52.394923       1 proxier.go:871] Syncing iptables rules\nI0825 04:20:52.418603       1 proxier.go:826] syncProxyRules took 23.932687ms\nI0825 04:20:52.973128       1 proxier.go:871] Syncing iptables rules\nI0825 04:20:53.041388       1 proxier.go:826] syncProxyRules took 68.520905ms\nI0825 04:20:54.043311       1 proxier.go:871] Syncing iptables rules\nI0825 04:20:54.116490       1 proxier.go:826] syncProxyRules took 73.520653ms\nI0825 04:20:55.116886       1 proxier.go:871] Syncing iptables rules\nI0825 04:20:55.144966       1 proxier.go:826] syncProxyRules took 28.358202ms\nI0825 04:21:02.730818       1 service.go:275] Service ephemeral-7241-6804/csi-hostpath-attacher updated: 0 ports\nI0825 04:21:02.731214       1 service.go:415] Removing service port \"ephemeral-7241-6804/csi-hostpath-attacher:dummy\"\nI0825 04:21:02.731649       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:02.790253       1 proxier.go:826] syncProxyRules took 59.177134ms\nI0825 04:21:02.790544       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:02.816574       1 proxier.go:826] syncProxyRules took 26.289694ms\nI0825 04:21:03.070136       1 service.go:275] Service ephemeral-7241-6804/csi-hostpathplugin updated: 0 ports\nI0825 04:21:03.291582       1 service.go:275] Service ephemeral-7241-6804/csi-hostpath-provisioner updated: 0 ports\nI0825 04:21:03.510881       1 service.go:275] Service ephemeral-7241-6804/csi-hostpath-resizer updated: 0 ports\nI0825 04:21:03.730449       1 service.go:275] Service ephemeral-7241-6804/csi-hostpath-snapshotter updated: 0 ports\nI0825 04:21:03.731286       1 service.go:415] Removing service port 
\"ephemeral-7241-6804/csi-hostpathplugin:dummy\"\nI0825 04:21:03.731443       1 service.go:415] Removing service port \"ephemeral-7241-6804/csi-hostpath-provisioner:dummy\"\nI0825 04:21:03.731459       1 service.go:415] Removing service port \"ephemeral-7241-6804/csi-hostpath-resizer:dummy\"\nI0825 04:21:03.731517       1 service.go:415] Removing service port \"ephemeral-7241-6804/csi-hostpath-snapshotter:dummy\"\nI0825 04:21:03.731674       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:03.761878       1 proxier.go:826] syncProxyRules took 30.771454ms\nI0825 04:21:04.762190       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:04.786273       1 proxier.go:826] syncProxyRules took 24.317761ms\nI0825 04:21:12.837279       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-attacher updated: 1 ports\nI0825 04:21:12.837828       1 service.go:390] Adding new service port \"provisioning-1562-2531/csi-hostpath-attacher:dummy\" at 100.65.73.190:12345/TCP\nI0825 04:21:12.838257       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:12.898569       1 proxier.go:826] syncProxyRules took 61.253158ms\nI0825 04:21:12.898966       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:12.935215       1 proxier.go:826] syncProxyRules took 36.613404ms\nI0825 04:21:13.151841       1 service.go:275] Service provisioning-1562-2531/csi-hostpathplugin updated: 1 ports\nI0825 04:21:13.362635       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-provisioner updated: 1 ports\nI0825 04:21:13.580257       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-resizer updated: 1 ports\nI0825 04:21:13.792656       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-snapshotter updated: 1 ports\nI0825 04:21:13.833918       1 service.go:275] Service webhook-5337/e2e-test-webhook updated: 1 ports\nI0825 04:21:13.840072       1 service.go:390] Adding new service port \"provisioning-1562-2531/csi-hostpath-resizer:dummy\" at 
100.65.48.81:12345/TCP\nI0825 04:21:13.840100       1 service.go:390] Adding new service port \"provisioning-1562-2531/csi-hostpath-snapshotter:dummy\" at 100.65.20.229:12345/TCP\nI0825 04:21:13.840336       1 service.go:390] Adding new service port \"webhook-5337/e2e-test-webhook\" at 100.70.26.14:8443/TCP\nI0825 04:21:13.840353       1 service.go:390] Adding new service port \"provisioning-1562-2531/csi-hostpathplugin:dummy\" at 100.67.68.81:12345/TCP\nI0825 04:21:13.840412       1 service.go:390] Adding new service port \"provisioning-1562-2531/csi-hostpath-provisioner:dummy\" at 100.65.243.0:12345/TCP\nI0825 04:21:13.840555       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:13.874269       1 proxier.go:826] syncProxyRules took 34.361693ms\nI0825 04:21:16.259718       1 service.go:275] Service services-9512/service-proxy-toggled updated: 0 ports\nI0825 04:21:16.260131       1 service.go:415] Removing service port \"services-9512/service-proxy-toggled\"\nI0825 04:21:16.260352       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:16.294677       1 proxier.go:826] syncProxyRules took 34.748056ms\nI0825 04:21:16.295077       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:16.324451       1 proxier.go:826] syncProxyRules took 29.605556ms\nI0825 04:21:17.324889       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:17.348130       1 proxier.go:826] syncProxyRules took 23.535429ms\nI0825 04:21:18.348869       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:18.391963       1 proxier.go:826] syncProxyRules took 43.451956ms\nI0825 04:21:19.378817       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:19.409508       1 proxier.go:826] syncProxyRules took 30.972811ms\nI0825 04:21:20.410012       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:20.435684       1 proxier.go:826] syncProxyRules took 25.891039ms\nI0825 04:21:22.140639       1 service.go:275] Service services-9512/service-proxy-toggled updated: 1 ports\nI0825 
04:21:22.141002       1 service.go:390] Adding new service port \"services-9512/service-proxy-toggled\" at 100.65.200.134:80/TCP\nI0825 04:21:22.141258       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:22.209385       1 proxier.go:826] syncProxyRules took 68.566607ms\nI0825 04:21:23.209832       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:23.251038       1 proxier.go:826] syncProxyRules took 41.513466ms\nI0825 04:21:29.412941       1 service.go:275] Service webhook-5337/e2e-test-webhook updated: 0 ports\nI0825 04:21:29.413251       1 service.go:415] Removing service port \"webhook-5337/e2e-test-webhook\"\nI0825 04:21:29.413933       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:29.449240       1 proxier.go:826] syncProxyRules took 36.18025ms\nI0825 04:21:29.449528       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:29.475280       1 proxier.go:826] syncProxyRules took 26.008563ms\nI0825 04:21:33.770435       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-attacher updated: 1 ports\nI0825 04:21:33.770608       1 service.go:390] Adding new service port \"provisioning-1576-1838/csi-hostpath-attacher:dummy\" at 100.70.110.212:12345/TCP\nI0825 04:21:33.770746       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:33.803092       1 proxier.go:826] syncProxyRules took 32.624847ms\nI0825 04:21:33.803447       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:33.833020       1 proxier.go:826] syncProxyRules took 29.894606ms\nI0825 04:21:34.090633       1 service.go:275] Service provisioning-1576-1838/csi-hostpathplugin updated: 1 ports\nI0825 04:21:34.305842       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-provisioner updated: 1 ports\nI0825 04:21:34.518957       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-resizer updated: 1 ports\nI0825 04:21:34.732018       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-snapshotter updated: 1 ports\nI0825 04:21:34.833414       1 
service.go:390] Adding new service port \"provisioning-1576-1838/csi-hostpath-resizer:dummy\" at 100.66.167.22:12345/TCP\nI0825 04:21:34.833444       1 service.go:390] Adding new service port \"provisioning-1576-1838/csi-hostpath-snapshotter:dummy\" at 100.64.169.13:12345/TCP\nI0825 04:21:34.833455       1 service.go:390] Adding new service port \"provisioning-1576-1838/csi-hostpathplugin:dummy\" at 100.71.156.238:12345/TCP\nI0825 04:21:34.833465       1 service.go:390] Adding new service port \"provisioning-1576-1838/csi-hostpath-provisioner:dummy\" at 100.66.148.79:12345/TCP\nI0825 04:21:34.833606       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:34.894291       1 proxier.go:826] syncProxyRules took 61.084312ms\nI0825 04:21:36.012384       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:36.060690       1 proxier.go:826] syncProxyRules took 48.589959ms\nI0825 04:21:36.311465       1 service.go:275] Service services-2907/service-headless-toggled updated: 0 ports\nI0825 04:21:36.903676       1 service.go:415] Removing service port \"services-2907/service-headless-toggled\"\nI0825 04:21:36.904259       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:36.957552       1 proxier.go:826] syncProxyRules took 54.015466ms\nI0825 04:21:37.283297       1 service.go:275] Service kubectl-3301/agnhost-primary updated: 1 ports\nI0825 04:21:37.958131       1 service.go:390] Adding new service port \"kubectl-3301/agnhost-primary\" at 100.70.200.139:6379/TCP\nI0825 04:21:37.958359       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:37.992789       1 proxier.go:826] syncProxyRules took 34.788229ms\nI0825 04:21:38.993151       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:39.018912       1 proxier.go:826] syncProxyRules took 26.005856ms\nI0825 04:21:39.903459       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:39.931866       1 proxier.go:826] syncProxyRules took 28.631999ms\nI0825 04:21:41.406916       1 service.go:275] Service 
services-249/sourceip-test updated: 1 ports\nI0825 04:21:41.407419       1 service.go:390] Adding new service port \"services-249/sourceip-test\" at 100.68.2.170:8080/TCP\nI0825 04:21:41.407735       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:41.444532       1 proxier.go:826] syncProxyRules took 37.275226ms\nI0825 04:21:42.445044       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:42.506096       1 proxier.go:826] syncProxyRules took 61.431225ms\nI0825 04:21:42.898065       1 service.go:275] Service services-9512/service-proxy-toggled updated: 0 ports\nI0825 04:21:42.898269       1 service.go:415] Removing service port \"services-9512/service-proxy-toggled\"\nI0825 04:21:42.898476       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:42.938898       1 proxier.go:826] syncProxyRules took 40.794459ms\nI0825 04:21:43.939498       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:44.017003       1 proxier.go:826] syncProxyRules took 77.932034ms\nI0825 04:21:44.250641       1 service.go:275] Service kubectl-3301/agnhost-primary updated: 0 ports\nI0825 04:21:45.019626       1 service.go:415] Removing service port \"kubectl-3301/agnhost-primary\"\nI0825 04:21:45.019886       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:45.100319       1 proxier.go:826] syncProxyRules took 80.84253ms\nI0825 04:21:46.027145       1 service.go:275] Service volume-7993-5153/csi-hostpath-attacher updated: 1 ports\nI0825 04:21:46.027544       1 service.go:390] Adding new service port \"volume-7993-5153/csi-hostpath-attacher:dummy\" at 100.64.4.72:12345/TCP\nI0825 04:21:46.027821       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:46.055199       1 proxier.go:826] syncProxyRules took 27.83015ms\nI0825 04:21:46.348408       1 service.go:275] Service volume-7993-5153/csi-hostpathplugin updated: 1 ports\nI0825 04:21:46.561978       1 service.go:275] Service volume-7993-5153/csi-hostpath-provisioner updated: 1 ports\nI0825 04:21:46.775826       1 
service.go:275] Service volume-7993-5153/csi-hostpath-resizer updated: 1 ports\nI0825 04:21:46.776135       1 service.go:390] Adding new service port \"volume-7993-5153/csi-hostpath-provisioner:dummy\" at 100.70.20.133:12345/TCP\nI0825 04:21:46.776211       1 service.go:390] Adding new service port \"volume-7993-5153/csi-hostpath-resizer:dummy\" at 100.69.150.205:12345/TCP\nI0825 04:21:46.776244       1 service.go:390] Adding new service port \"volume-7993-5153/csi-hostpathplugin:dummy\" at 100.66.207.90:12345/TCP\nI0825 04:21:46.776450       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:46.805022       1 proxier.go:826] syncProxyRules took 29.056791ms\nI0825 04:21:46.994980       1 service.go:275] Service volume-7993-5153/csi-hostpath-snapshotter updated: 1 ports\nI0825 04:21:47.805446       1 service.go:390] Adding new service port \"volume-7993-5153/csi-hostpath-snapshotter:dummy\" at 100.69.142.10:12345/TCP\nI0825 04:21:47.806036       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:47.859081       1 proxier.go:826] syncProxyRules took 53.807982ms\nI0825 04:21:49.958281       1 service.go:275] Service services-3349/affinity-clusterip-timeout updated: 0 ports\nI0825 04:21:49.958573       1 service.go:415] Removing service port \"services-3349/affinity-clusterip-timeout\"\nI0825 04:21:49.958812       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:50.025012       1 proxier.go:826] syncProxyRules took 66.690387ms\nI0825 04:21:50.025393       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:50.072914       1 proxier.go:826] syncProxyRules took 47.792385ms\nI0825 04:21:51.762545       1 service.go:275] Service volume-expand-8827-6698/csi-hostpath-attacher updated: 0 ports\nI0825 04:21:51.762855       1 service.go:415] Removing service port \"volume-expand-8827-6698/csi-hostpath-attacher:dummy\"\nI0825 04:21:51.763073       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:51.801419       1 proxier.go:826] syncProxyRules took 
38.762553ms\nI0825 04:21:52.092298       1 service.go:275] Service volume-expand-8827-6698/csi-hostpathplugin updated: 0 ports\nI0825 04:21:52.092733       1 service.go:415] Removing service port \"volume-expand-8827-6698/csi-hostpathplugin:dummy\"\nI0825 04:21:52.093172       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:52.126583       1 proxier.go:826] syncProxyRules took 34.247127ms\nI0825 04:21:52.316346       1 service.go:275] Service volume-expand-8827-6698/csi-hostpath-provisioner updated: 0 ports\nI0825 04:21:52.347150       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-attacher updated: 0 ports\nI0825 04:21:52.545368       1 service.go:275] Service volume-expand-8827-6698/csi-hostpath-resizer updated: 0 ports\nI0825 04:21:52.673640       1 service.go:275] Service provisioning-1562-2531/csi-hostpathplugin updated: 0 ports\nI0825 04:21:52.784309       1 service.go:275] Service volume-expand-8827-6698/csi-hostpath-snapshotter updated: 0 ports\nI0825 04:21:52.910833       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-provisioner updated: 0 ports\nI0825 04:21:53.036802       1 service.go:415] Removing service port \"volume-expand-8827-6698/csi-hostpath-provisioner:dummy\"\nI0825 04:21:53.036936       1 service.go:415] Removing service port \"provisioning-1562-2531/csi-hostpath-attacher:dummy\"\nI0825 04:21:53.036952       1 service.go:415] Removing service port \"volume-expand-8827-6698/csi-hostpath-resizer:dummy\"\nI0825 04:21:53.038264       1 service.go:415] Removing service port \"provisioning-1562-2531/csi-hostpathplugin:dummy\"\nI0825 04:21:53.038289       1 service.go:415] Removing service port \"volume-expand-8827-6698/csi-hostpath-snapshotter:dummy\"\nI0825 04:21:53.038299       1 service.go:415] Removing service port \"provisioning-1562-2531/csi-hostpath-provisioner:dummy\"\nI0825 04:21:53.038537       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:53.071298       1 proxier.go:826] syncProxyRules took 
34.666259ms\nI0825 04:21:53.145182       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-resizer updated: 0 ports\nI0825 04:21:53.362525       1 service.go:275] Service provisioning-1562-2531/csi-hostpath-snapshotter updated: 0 ports\nI0825 04:21:54.071617       1 service.go:415] Removing service port \"provisioning-1562-2531/csi-hostpath-resizer:dummy\"\nI0825 04:21:54.071644       1 service.go:415] Removing service port \"provisioning-1562-2531/csi-hostpath-snapshotter:dummy\"\nI0825 04:21:54.071825       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:54.098347       1 proxier.go:826] syncProxyRules took 26.891536ms\nI0825 04:21:55.638221       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:55.663370       1 proxier.go:826] syncProxyRules took 25.715342ms\nI0825 04:21:56.633872       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:56.676007       1 proxier.go:826] syncProxyRules took 42.338764ms\nI0825 04:21:57.037865       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:57.086903       1 proxier.go:826] syncProxyRules took 49.29994ms\nI0825 04:21:58.087312       1 proxier.go:871] Syncing iptables rules\nI0825 04:21:58.117821       1 proxier.go:826] syncProxyRules took 30.793525ms\nI0825 04:21:58.211741       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-attacher updated: 1 ports\nI0825 04:21:58.527752       1 service.go:275] Service provisioning-4622-5316/csi-hostpathplugin updated: 1 ports\nI0825 04:21:58.744906       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-provisioner updated: 1 ports\nI0825 04:21:58.966577       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-resizer updated: 1 ports\nI0825 04:21:58.967374       1 service.go:390] Adding new service port \"provisioning-4622-5316/csi-hostpath-attacher:dummy\" at 100.66.40.191:12345/TCP\nI0825 04:21:58.967784       1 service.go:390] Adding new service port \"provisioning-4622-5316/csi-hostpathplugin:dummy\" at 
100.64.106.116:12345/TCP
I0825 04:21:58.967904       1 service.go:390] Adding new service port "provisioning-4622-5316/csi-hostpath-provisioner:dummy" at 100.71.77.18:12345/TCP
I0825 04:21:58.967998       1 service.go:390] Adding new service port "provisioning-4622-5316/csi-hostpath-resizer:dummy" at 100.69.169.185:12345/TCP
I0825 04:21:58.968218       1 proxier.go:871] Syncing iptables rules
I0825 04:21:59.007604       1 proxier.go:826] syncProxyRules took 40.383124ms
I0825 04:21:59.185503       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-snapshotter updated: 1 ports
I0825 04:22:00.007870       1 service.go:390] Adding new service port "provisioning-4622-5316/csi-hostpath-snapshotter:dummy" at 100.64.64.106:12345/TCP
I0825 04:22:00.008103       1 proxier.go:871] Syncing iptables rules
I0825 04:22:00.055411       1 proxier.go:826] syncProxyRules took 47.686437ms
I0825 04:22:01.056471       1 proxier.go:871] Syncing iptables rules
I0825 04:22:01.091823       1 proxier.go:826] syncProxyRules took 35.672562ms
I0825 04:22:02.003912       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-attacher updated: 0 ports
I0825 04:22:02.004244       1 service.go:415] Removing service port "provisioning-1576-1838/csi-hostpath-attacher:dummy"
I0825 04:22:02.004943       1 proxier.go:871] Syncing iptables rules
I0825 04:22:02.061528       1 proxier.go:826] syncProxyRules took 57.436761ms
I0825 04:22:02.353314       1 service.go:275] Service provisioning-1576-1838/csi-hostpathplugin updated: 0 ports
I0825 04:22:02.575890       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-provisioner updated: 0 ports
I0825 04:22:02.798909       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-resizer updated: 0 ports
I0825 04:22:03.022770       1 service.go:275] Service provisioning-1576-1838/csi-hostpath-snapshotter updated: 0 ports
I0825 04:22:03.023364       1 service.go:415] Removing service port "provisioning-1576-1838/csi-hostpathplugin:dummy"
I0825 04:22:03.023507       1 service.go:415] Removing service port "provisioning-1576-1838/csi-hostpath-provisioner:dummy"
I0825 04:22:03.023524       1 service.go:415] Removing service port "provisioning-1576-1838/csi-hostpath-resizer:dummy"
I0825 04:22:03.023533       1 service.go:415] Removing service port "provisioning-1576-1838/csi-hostpath-snapshotter:dummy"
I0825 04:22:03.023724       1 proxier.go:871] Syncing iptables rules
I0825 04:22:03.060549       1 proxier.go:826] syncProxyRules took 37.333792ms
I0825 04:22:04.061175       1 proxier.go:871] Syncing iptables rules
I0825 04:22:04.094587       1 proxier.go:826] syncProxyRules took 33.779101ms
I0825 04:22:24.273695       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-attacher updated: 0 ports
I0825 04:22:24.274059       1 service.go:415] Removing service port "provisioning-4622-5316/csi-hostpath-attacher:dummy"
I0825 04:22:24.274199       1 proxier.go:871] Syncing iptables rules
I0825 04:22:24.317023       1 proxier.go:826] syncProxyRules took 43.289046ms
I0825 04:22:24.317408       1 proxier.go:871] Syncing iptables rules
I0825 04:22:24.362229       1 proxier.go:826] syncProxyRules took 45.109205ms
I0825 04:22:24.617707       1 service.go:275] Service provisioning-4622-5316/csi-hostpathplugin updated: 0 ports
I0825 04:22:24.852473       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-provisioner updated: 0 ports
I0825 04:22:25.078327       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-resizer updated: 0 ports
I0825 04:22:25.306077       1 service.go:275] Service provisioning-4622-5316/csi-hostpath-snapshotter updated: 0 ports
I0825 04:22:25.306729       1 service.go:415] Removing service port "provisioning-4622-5316/csi-hostpath-resizer:dummy"
I0825 04:22:25.306833       1 service.go:415] Removing service port "provisioning-4622-5316/csi-hostpath-snapshotter:dummy"
I0825 04:22:25.306919       1 service.go:415] Removing service port "provisioning-4622-5316/csi-hostpathplugin:dummy"
I0825 04:22:25.307731       1 service.go:415] Removing service port "provisioning-4622-5316/csi-hostpath-provisioner:dummy"
I0825 04:22:25.307978       1 proxier.go:871] Syncing iptables rules
I0825 04:22:25.370872       1 proxier.go:826] syncProxyRules took 64.40966ms
I0825 04:22:26.371336       1 proxier.go:871] Syncing iptables rules
I0825 04:22:26.397910       1 proxier.go:826] syncProxyRules took 26.882004ms
I0825 04:22:32.713401       1 service.go:275] Service dns-12/test-service-2 updated: 1 ports
I0825 04:22:32.713834       1 service.go:390] Adding new service port "dns-12/test-service-2:http" at 100.71.90.163:80/TCP
I0825 04:22:32.714249       1 proxier.go:871] Syncing iptables rules
I0825 04:22:32.739248       1 proxier.go:826] syncProxyRules took 25.569367ms
I0825 04:22:32.739476       1 proxier.go:871] Syncing iptables rules
I0825 04:22:32.763240       1 proxier.go:826] syncProxyRules took 23.964693ms
I0825 04:22:34.806778       1 proxier.go:871] Syncing iptables rules
I0825 04:22:34.833293       1 proxier.go:826] syncProxyRules took 26.71949ms
I0825 04:22:41.877325       1 service.go:275] Service webhook-8251/e2e-test-webhook updated: 1 ports
I0825 04:22:41.877656       1 service.go:390] Adding new service port "webhook-8251/e2e-test-webhook" at 100.69.91.253:8443/TCP
I0825 04:22:41.877949       1 proxier.go:871] Syncing iptables rules
I0825 04:22:41.903146       1 proxier.go:826] syncProxyRules took 25.784086ms
I0825 04:22:41.903445       1 proxier.go:871] Syncing iptables rules
I0825 04:22:41.936367       1 proxier.go:826] syncProxyRules took 33.190459ms
I0825 04:22:44.694962       1 service.go:275] Service webhook-8251/e2e-test-webhook updated: 0 ports
I0825 04:22:44.695586       1 service.go:415] Removing service port "webhook-8251/e2e-test-webhook"
I0825 04:22:44.695812       1 proxier.go:871] Syncing iptables rules
I0825 04:22:44.731966       1 proxier.go:826] syncProxyRules took 36.532792ms
I0825 04:22:45.157519       1 proxier.go:871] Syncing iptables rules
I0825 04:22:45.197249       1 proxier.go:826] syncProxyRules took 39.320351ms
I0825 04:22:57.049623       1 service.go:275] Service services-4305/hairpin-test updated: 1 ports
I0825 04:22:57.050384       1 service.go:390] Adding new service port "services-4305/hairpin-test" at 100.66.194.190:8080/TCP
I0825 04:22:57.050577       1 proxier.go:871] Syncing iptables rules
I0825 04:22:57.076547       1 proxier.go:826] syncProxyRules took 26.30421ms
I0825 04:22:57.076884       1 proxier.go:871] Syncing iptables rules
I0825 04:22:57.099316       1 proxier.go:826] syncProxyRules took 22.739705ms
I0825 04:22:58.893153       1 service.go:275] Service services-7809/externalname-service updated: 1 ports
I0825 04:22:58.893468       1 service.go:390] Adding new service port "services-7809/externalname-service:http" at 100.64.41.44:80/TCP
I0825 04:22:58.893688       1 proxier.go:871] Syncing iptables rules
I0825 04:22:58.947632       1 proxier.go:826] syncProxyRules took 54.354837ms
I0825 04:22:59.948254       1 proxier.go:871] Syncing iptables rules
I0825 04:23:00.004183       1 proxier.go:826] syncProxyRules took 56.168966ms
I0825 04:23:00.981398       1 proxier.go:871] Syncing iptables rules
I0825 04:23:01.006321       1 proxier.go:826] syncProxyRules took 25.12657ms
I0825 04:23:04.769243       1 proxier.go:871] Syncing iptables rules
I0825 04:23:04.805662       1 proxier.go:826] syncProxyRules took 36.709414ms
I0825 04:23:06.588540       1 service.go:275] Service volumemode-928-9345/csi-hostpath-attacher updated: 1 ports
I0825 04:23:06.589392       1 service.go:390] Adding new service port "volumemode-928-9345/csi-hostpath-attacher:dummy" at 100.65.120.120:12345/TCP
I0825 04:23:06.589551       1 proxier.go:871] Syncing iptables rules
I0825 04:23:06.622840       1 proxier.go:826] syncProxyRules took 33.613697ms
I0825 04:23:06.623294       1 proxier.go:871] Syncing iptables rules
I0825 04:23:06.652275       1 proxier.go:826] syncProxyRules took 29.242226ms
I0825 04:23:06.907164       1 service.go:275] Service volumemode-928-9345/csi-hostpathplugin updated: 1 ports
I0825 04:23:07.119862       1 service.go:275] Service volumemode-928-9345/csi-hostpath-provisioner updated: 1 ports
I0825 04:23:07.332935       1 service.go:275] Service volumemode-928-9345/csi-hostpath-resizer updated: 1 ports
I0825 04:23:07.551596       1 service.go:275] Service volumemode-928-9345/csi-hostpath-snapshotter updated: 1 ports
I0825 04:23:07.654268       1 service.go:390] Adding new service port "volumemode-928-9345/csi-hostpathplugin:dummy" at 100.66.53.129:12345/TCP
I0825 04:23:07.654375       1 service.go:390] Adding new service port "volumemode-928-9345/csi-hostpath-provisioner:dummy" at 100.69.171.94:12345/TCP
I0825 04:23:07.654431       1 service.go:390] Adding new service port "volumemode-928-9345/csi-hostpath-resizer:dummy" at 100.67.214.253:12345/TCP
I0825 04:23:07.654475       1 service.go:390] Adding new service port "volumemode-928-9345/csi-hostpath-snapshotter:dummy" at 100.66.48.99:12345/TCP
I0825 04:23:07.654668       1 proxier.go:871] Syncing iptables rules
I0825 04:23:07.692443       1 proxier.go:826] syncProxyRules took 38.383528ms
I0825 04:23:09.975450       1 service.go:275] Service services-7190/tolerate-unready updated: 1 ports
I0825 04:23:09.975714       1 service.go:390] Adding new service port "services-7190/tolerate-unready:http" at 100.67.141.32:80/TCP
I0825 04:23:09.975839       1 proxier.go:871] Syncing iptables rules
I0825 04:23:10.003735       1 proxier.go:826] syncProxyRules took 28.252625ms
I0825 04:23:10.003974       1 proxier.go:871] Syncing iptables rules
I0825 04:23:10.053059       1 proxier.go:826] syncProxyRules took 49.293949ms
I0825 04:23:10.095656       1 service.go:275] Service volume-7993-5153/csi-hostpath-attacher updated: 0 ports
I0825 04:23:10.427879       1 service.go:275] Service volume-7993-5153/csi-hostpathplugin updated: 0 ports
I0825 04:23:10.652150       1 service.go:275] Service volume-7993-5153/csi-hostpath-provisioner updated: 0 ports
I0825 04:23:10.872881       1 service.go:275] Service volume-7993-5153/csi-hostpath-resizer updated: 0 ports
I0825 04:23:11.053390       1 service.go:415] Removing service port "volume-7993-5153/csi-hostpath-attacher:dummy"
I0825 04:23:11.053419       1 service.go:415] Removing service port "volume-7993-5153/csi-hostpathplugin:dummy"
I0825 04:23:11.053427       1 service.go:415] Removing service port "volume-7993-5153/csi-hostpath-provisioner:dummy"
I0825 04:23:11.053436       1 service.go:415] Removing service port "volume-7993-5153/csi-hostpath-resizer:dummy"
I0825 04:23:11.053663       1 proxier.go:871] Syncing iptables rules
I0825 04:23:11.078113       1 proxier.go:826] syncProxyRules took 24.848465ms
I0825 04:23:11.093962       1 service.go:275] Service volume-7993-5153/csi-hostpath-snapshotter updated: 0 ports
I0825 04:23:12.078503       1 service.go:415] Removing service port "volume-7993-5153/csi-hostpath-snapshotter:dummy"
I0825 04:23:12.078716       1 proxier.go:871] Syncing iptables rules
I0825 04:23:12.100120       1 proxier.go:826] syncProxyRules took 21.762283ms
I0825 04:23:13.100462       1 proxier.go:871] Syncing iptables rules
I0825 04:23:13.125950       1 proxier.go:826] syncProxyRules took 25.733564ms
I0825 04:23:14.126558       1 proxier.go:871] Syncing iptables rules
I0825 04:23:14.167254       1 proxier.go:826] syncProxyRules took 41.010862ms
I0825 04:23:15.164500       1 proxier.go:871] Syncing iptables rules
I0825 04:23:15.188386       1 proxier.go:826] syncProxyRules took 24.088621ms
I0825 04:23:15.560468       1 service.go:275] Service services-4305/hairpin-test updated: 0 ports
I0825 04:23:16.188627       1 service.go:415] Removing service port "services-4305/hairpin-test"
I0825 04:23:16.188922       1 proxier.go:871] Syncing iptables rules
I0825 04:23:16.219546       1 proxier.go:826] syncProxyRules took 31.076802ms
I0825 04:23:19.997529       1 service.go:275] Service services-8019/affinity-nodeport updated: 1 ports
I0825 04:23:19.997839       1 service.go:390] Adding new service port "services-8019/affinity-nodeport" at 100.64.4.183:80/TCP
I0825 04:23:19.998010       1 proxier.go:871] Syncing iptables rules
I0825 04:23:20.021098       1 proxier.go:1715] Opened local port "nodePort for services-8019/affinity-nodeport" (:30391/tcp)
I0825 04:23:20.025087       1 proxier.go:826] syncProxyRules took 27.523087ms
I0825 04:23:20.025429       1 proxier.go:871] Syncing iptables rules
I0825 04:23:20.049362       1 proxier.go:826] syncProxyRules took 24.24597ms
I0825 04:23:21.811090       1 proxier.go:871] Syncing iptables rules
I0825 04:23:21.846213       1 proxier.go:826] syncProxyRules took 35.412465ms
I0825 04:23:22.101094       1 proxier.go:871] Syncing iptables rules
I0825 04:23:22.124138       1 proxier.go:826] syncProxyRules took 23.343873ms
I0825 04:23:22.144974       1 service.go:275] Service services-6751/nodeport-test updated: 1 ports
I0825 04:23:23.124392       1 service.go:390] Adding new service port "services-6751/nodeport-test:http" at 100.65.18.6:80/TCP
I0825 04:23:23.124580       1 proxier.go:871] Syncing iptables rules
I0825 04:23:23.145320       1 proxier.go:1715] Opened local port "nodePort for services-6751/nodeport-test:http" (:31426/tcp)
I0825 04:23:23.161064       1 proxier.go:826] syncProxyRules took 36.818007ms
I0825 04:23:24.106922       1 proxier.go:871] Syncing iptables rules
I0825 04:23:24.138088       1 proxier.go:826] syncProxyRules took 32.217247ms
I0825 04:23:25.138773       1 proxier.go:871] Syncing iptables rules
I0825 04:23:25.161570       1 proxier.go:826] syncProxyRules took 23.10671ms
I0825 04:23:29.057023       1 service.go:275] Service services-7809/externalname-service updated: 0 ports
I0825 04:23:29.057780       1 service.go:415] Removing service port "services-7809/externalname-service:http"
I0825 04:23:29.058082       1 proxier.go:871] Syncing iptables rules
I0825 04:23:29.093614       1 proxier.go:826] syncProxyRules took 35.990207ms
I0825 04:23:29.093968       1 proxier.go:871] Syncing iptables rules
I0825 04:23:29.148337       1 proxier.go:826] syncProxyRules took 54.634094ms
I0825 04:23:31.732854       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-attacher updated: 1 ports
I0825 04:23:31.733164       1 service.go:390] Adding new service port "ephemeral-3599-3217/csi-hostpath-attacher:dummy" at 100.65.87.124:12345/TCP
I0825 04:23:31.733389       1 proxier.go:871] Syncing iptables rules
I0825 04:23:31.829537       1 proxier.go:826] syncProxyRules took 96.561745ms
I0825 04:23:31.833437       1 proxier.go:871] Syncing iptables rules
I0825 04:23:31.909432       1 proxier.go:826] syncProxyRules took 76.576964ms
I0825 04:23:32.071848       1 service.go:275] Service ephemeral-3599-3217/csi-hostpathplugin updated: 1 ports
I0825 04:23:32.286985       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-provisioner updated: 1 ports
I0825 04:23:32.500965       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-resizer updated: 1 ports
I0825 04:23:32.714716       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-snapshotter updated: 1 ports
I0825 04:23:32.909958       1 service.go:390] Adding new service port "ephemeral-3599-3217/csi-hostpath-snapshotter:dummy" at 100.69.223.50:12345/TCP
I0825 04:23:32.910073       1 service.go:390] Adding new service port "ephemeral-3599-3217/csi-hostpathplugin:dummy" at 100.66.241.191:12345/TCP
I0825 04:23:32.910103       1 service.go:390] Adding new service port "ephemeral-3599-3217/csi-hostpath-provisioner:dummy" at 100.66.48.98:12345/TCP
I0825 04:23:32.910169       1 service.go:390] Adding new service port "ephemeral-3599-3217/csi-hostpath-resizer:dummy" at 100.70.230.139:12345/TCP
I0825 04:23:32.910477       1 proxier.go:871] Syncing iptables rules
I0825 04:23:32.939318       1 proxier.go:826] syncProxyRules took 29.563259ms
I0825 04:23:36.943140       1 proxier.go:871] Syncing iptables rules
I0825 04:23:36.972186       1 proxier.go:826] syncProxyRules took 29.327304ms
I0825 04:23:37.950202       1 proxier.go:871] Syncing iptables rules
I0825 04:23:37.988453       1 proxier.go:826] syncProxyRules took 38.536158ms
I0825 04:23:38.597897       1 proxier.go:871] Syncing iptables rules
I0825 04:23:38.627964       1 proxier.go:826] syncProxyRules took 30.290036ms
I0825 04:23:39.628398       1 proxier.go:871] Syncing iptables rules
I0825 04:23:39.652430       1 proxier.go:826] syncProxyRules took 24.340538ms
I0825 04:23:41.550396       1 proxier.go:871] Syncing iptables rules
I0825 04:23:41.584805       1 proxier.go:826] syncProxyRules took 34.683769ms
I0825 04:23:42.354267       1 proxier.go:871] Syncing iptables rules
I0825 04:23:42.378258       1 proxier.go:826] syncProxyRules took 24.283643ms
I0825 04:23:42.642961       1 service.go:275] Service volumemode-928-9345/csi-hostpath-attacher updated: 0 ports
I0825 04:23:42.643355       1 service.go:415] Removing service port "volumemode-928-9345/csi-hostpath-attacher:dummy"
I0825 04:23:42.643678       1 proxier.go:871] Syncing iptables rules
I0825 04:23:42.676143       1 proxier.go:826] syncProxyRules took 32.938386ms
I0825 04:23:42.972243       1 service.go:275] Service volumemode-928-9345/csi-hostpathplugin updated: 0 ports
I0825 04:23:43.191941       1 service.go:275] Service volumemode-928-9345/csi-hostpath-provisioner updated: 0 ports
I0825 04:23:43.413852       1 service.go:275] Service volumemode-928-9345/csi-hostpath-resizer updated: 0 ports
I0825 04:23:43.633514       1 service.go:275] Service volumemode-928-9345/csi-hostpath-snapshotter updated: 0 ports
I0825 04:23:43.633874       1 service.go:415] Removing service port "volumemode-928-9345/csi-hostpath-snapshotter:dummy"
I0825 04:23:43.633958       1 service.go:415] Removing service port "volumemode-928-9345/csi-hostpathplugin:dummy"
I0825 04:23:43.633984       1 service.go:415] Removing service port "volumemode-928-9345/csi-hostpath-provisioner:dummy"
I0825 04:23:43.634038       1 service.go:415] Removing service port "volumemode-928-9345/csi-hostpath-resizer:dummy"
I0825 04:23:43.634282       1 proxier.go:871] Syncing iptables rules
I0825 04:23:43.679185       1 proxier.go:826] syncProxyRules took 45.461681ms
I0825 04:23:44.679514       1 proxier.go:871] Syncing iptables rules
I0825 04:23:44.721531       1 proxier.go:826] syncProxyRules took 42.234481ms
I0825 04:23:45.156410       1 service.go:275] Service services-6751/nodeport-test updated: 0 ports
I0825 04:23:45.721817       1 service.go:415] Removing service port "services-6751/nodeport-test:http"
I0825 04:23:45.722184       1 proxier.go:871] Syncing iptables rules
I0825 04:23:45.753779       1 proxier.go:826] syncProxyRules took 32.131438ms
I0825 04:23:48.841477       1 service.go:275] Service services-8019/affinity-nodeport updated: 0 ports
I0825 04:23:48.841748       1 service.go:415] Removing service port "services-8019/affinity-nodeport"
I0825 04:23:48.841900       1 proxier.go:871] Syncing iptables rules
I0825 04:23:48.872171       1 proxier.go:826] syncProxyRules took 30.660409ms
I0825 04:23:48.872414       1 proxier.go:871] Syncing iptables rules
I0825 04:23:48.898422       1 proxier.go:826] syncProxyRules took 26.222518ms
I0825 04:24:03.961875       1 service.go:275] Service services-4946/affinity-nodeport-transition updated: 1 ports
I0825 04:24:03.962210       1 service.go:390] Adding new service port "services-4946/affinity-nodeport-transition" at 100.64.131.20:80/TCP
I0825 04:24:03.962743       1 proxier.go:871] Syncing iptables rules
I0825 04:24:03.985762       1 proxier.go:1715] Opened local port "nodePort for services-4946/affinity-nodeport-transition" (:31367/tcp)
I0825 04:24:03.989512       1 proxier.go:826] syncProxyRules took 27.470485ms
I0825 04:24:03.989744       1 proxier.go:871] Syncing iptables rules
I0825 04:24:04.012150       1 proxier.go:826] syncProxyRules took 22.610983ms
I0825 04:24:05.333772       1 proxier.go:871] Syncing iptables rules
I0825 04:24:05.359910       1 proxier.go:826] syncProxyRules took 26.348078ms
I0825 04:24:08.917689       1 proxier.go:871] Syncing iptables rules
I0825 04:24:08.943473       1 proxier.go:826] syncProxyRules took 26.077336ms
I0825 04:24:11.063137       1 proxier.go:871] Syncing iptables rules
I0825 04:24:11.094150       1 proxier.go:826] syncProxyRules took 31.23855ms
I0825 04:24:13.646553       1 proxier.go:871] Syncing iptables rules
I0825 04:24:13.676424       1 proxier.go:826] syncProxyRules took 30.158243ms
I0825 04:24:13.748835       1 service.go:275] Service services-249/sourceip-test updated: 0 ports
I0825 04:24:13.749229       1 service.go:415] Removing service port "services-249/sourceip-test"
I0825 04:24:13.750369       1 proxier.go:871] Syncing iptables rules
I0825 04:24:13.779531       1 proxier.go:826] syncProxyRules took 30.451414ms
I0825 04:24:14.780212       1 proxier.go:871] Syncing iptables rules
I0825 04:24:14.818275       1 proxier.go:826] syncProxyRules took 38.349364ms
I0825 04:24:14.850404       1 service.go:275] Service webhook-9598/e2e-test-webhook updated: 1 ports
I0825 04:24:15.818557       1 service.go:390] Adding new service port "webhook-9598/e2e-test-webhook" at 100.64.138.182:8443/TCP
I0825 04:24:15.818756       1 proxier.go:871] Syncing iptables rules
I0825 04:24:15.851272       1 proxier.go:826] syncProxyRules took 32.862432ms
I0825 04:24:17.524954       1 service.go:275] Service webhook-9598/e2e-test-webhook updated: 0 ports
I0825 04:24:17.525345       1 service.go:415] Removing service port "webhook-9598/e2e-test-webhook"
I0825 04:24:17.525615       1 proxier.go:871] Syncing iptables rules
I0825 04:24:17.604508       1 proxier.go:826] syncProxyRules took 79.304226ms
I0825 04:24:18.605379       1 proxier.go:871] Syncing iptables rules
I0825 04:24:18.638053       1 proxier.go:826] syncProxyRules took 33.277647ms
I0825 04:24:29.273291       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-attacher updated: 0 ports
I0825 04:24:29.274290       1 service.go:415] Removing service port "ephemeral-3599-3217/csi-hostpath-attacher:dummy"
I0825 04:24:29.274679       1 proxier.go:871] Syncing iptables rules
I0825 04:24:29.303562       1 proxier.go:826] syncProxyRules took 30.236225ms
I0825 04:24:29.303810       1 proxier.go:871] Syncing iptables rules
I0825 04:24:29.324634       1 proxier.go:826] syncProxyRules took 21.046311ms
I0825 04:24:29.601371       1 service.go:275] Service ephemeral-3599-3217/csi-hostpathplugin updated: 0 ports
I0825 04:24:29.819913       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-provisioner updated: 0 ports
I0825 04:24:30.043191       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-resizer updated: 0 ports
I0825 04:24:30.269515       1 service.go:275] Service ephemeral-3599-3217/csi-hostpath-snapshotter updated: 0 ports
I0825 04:24:30.282643       1 service.go:415] Removing service port "ephemeral-3599-3217/csi-hostpathplugin:dummy"
I0825 04:24:30.282665       1 service.go:415] Removing service port "ephemeral-3599-3217/csi-hostpath-provisioner:dummy"
I0825 04:24:30.282673       1 service.go:415] Removing service port "ephemeral-3599-3217/csi-hostpath-resizer:dummy"
I0825 04:24:30.282681       1 service.go:415] Removing service port "ephemeral-3599-3217/csi-hostpath-snapshotter:dummy"
I0825 04:24:30.282896       1 proxier.go:871] Syncing iptables rules
I0825 04:24:30.306441       1 proxier.go:826] syncProxyRules took 23.914936ms
I0825 04:24:32.058603       1 service.go:275] Service services-4946/affinity-nodeport-transition updated: 1 ports
I0825 04:24:32.058972       1 service.go:392] Updating existing service port "services-4946/affinity-nodeport-transition" at 100.64.131.20:80/TCP
I0825 04:24:32.059299       1 proxier.go:871] Syncing iptables rules
I0825 04:24:32.137204       1 proxier.go:826] syncProxyRules took 78.392883ms
I0825 04:24:33.510113       1 service.go:275] Service services-4946/affinity-nodeport-transition updated: 1 ports
I0825 04:24:33.510472       1 service.go:392] Updating existing service port "services-4946/affinity-nodeport-transition" at 100.64.131.20:80/TCP
I0825 04:24:33.510746       1 proxier.go:871] Syncing iptables rules
I0825 04:24:33.536613       1 proxier.go:826] syncProxyRules took 26.296233ms
I0825 04:24:35.262108       1 proxier.go:871] Syncing iptables rules
I0825 04:24:35.288089       1 proxier.go:826] syncProxyRules took 26.284451ms
I0825 04:24:35.288394       1 proxier.go:871] Syncing iptables rules
I0825 04:24:35.319577       1 proxier.go:826] syncProxyRules took 31.459745ms
I0825 04:24:44.049564       1 service.go:275] Service services-4946/affinity-nodeport-transition updated: 0 ports
I0825 04:24:44.050260       1 service.go:415] Removing service port "services-4946/affinity-nodeport-transition"
I0825 04:24:44.050401       1 proxier.go:871] Syncing iptables rules
I0825 04:24:44.081245       1 proxier.go:826] syncProxyRules took 31.11099ms
I0825 04:24:44.081469       1 proxier.go:871] Syncing iptables rules
I0825 04:24:44.102147       1 proxier.go:826] syncProxyRules took 20.874992ms
I0825 04:24:44.263837       1 service.go:275] Service webhook-7346/e2e-test-webhook updated: 1 ports
I0825 04:24:45.102420       1 service.go:390] Adding new service port "webhook-7346/e2e-test-webhook" at 100.70.99.53:8443/TCP
I0825 04:24:45.102569       1 proxier.go:871] Syncing iptables rules
I0825 04:24:45.125306       1 proxier.go:826] syncProxyRules took 23.046463ms
I0825 04:24:47.071119       1 service.go:275] Service webhook-7346/e2e-test-webhook updated: 0 ports
I0825 04:24:47.071426       1 service.go:415] Removing service port "webhook-7346/e2e-test-webhook"
I0825 04:24:47.071651       1 proxier.go:871] Syncing iptables rules
I0825 04:24:47.115527       1 proxier.go:826] syncProxyRules took 44.285909ms
I0825 04:24:47.117255       1 proxier.go:871] Syncing iptables rules
I0825 04:24:47.149483       1 proxier.go:826] syncProxyRules took 33.921308ms
I0825 04:24:53.163169       1 service.go:275] Service webhook-1762/e2e-test-webhook updated: 1 ports
I0825 04:24:53.163577       1 service.go:390] Adding new service port "webhook-1762/e2e-test-webhook" at 100.69.206.55:8443/TCP
I0825 04:24:53.164639       1 proxier.go:871] Syncing iptables rules
I0825 04:24:53.189711       1 proxier.go:826] syncProxyRules took 26.31204ms
I0825 04:24:53.190092       1 proxier.go:871] Syncing iptables rules
I0825 04:24:53.211932       1 proxier.go:826] syncProxyRules took 22.190856ms
I0825 04:24:55.840965       1 service.go:275] Service webhook-1762/e2e-test-webhook updated: 0 ports
I0825 04:24:55.841297       1 service.go:415] Removing service port "webhook-1762/e2e-test-webhook"
I0825 04:24:55.841562       1 proxier.go:871] Syncing iptables rules
I0825 04:24:55.883401       1 proxier.go:826] syncProxyRules took 42.4013ms
I0825 04:24:55.883785       1 proxier.go:871] Syncing iptables rules
I0825 04:24:55.920247       1 proxier.go:826] syncProxyRules took 36.764259ms
I0825 04:25:00.949648       1 service.go:275] Service provisioning-7689-156/csi-hostpath-attacher updated: 1 ports
I0825 04:25:00.950236       1 service.go:390] Adding new service port "provisioning-7689-156/csi-hostpath-attacher:dummy" at 100.70.52.42:12345/TCP
I0825 04:25:00.950402       1 proxier.go:871] Syncing iptables rules
I0825 04:25:00.979819       1 proxier.go:826] syncProxyRules took 30.13393ms
I0825 04:25:00.980097       1 proxier.go:871] Syncing iptables rules
I0825 04:25:01.001708       1 proxier.go:826] syncProxyRules took 21.861315ms
I0825 04:25:01.278529       1 service.go:275] Service provisioning-7689-156/csi-hostpathplugin updated: 1 ports
I0825 04:25:01.385841       1 service.go:275] Service dns-9988/test-service-2 updated: 1 ports
I0825 04:25:01.500572       1 service.go:275] Service provisioning-7689-156/csi-hostpath-provisioner updated: 1 ports
I0825 04:25:01.725525       1 service.go:275] Service provisioning-7689-156/csi-hostpath-resizer updated: 1 ports
I0825 04:25:01.936075       1 service.go:275] Service provisioning-7689-156/csi-hostpath-snapshotter updated: 1 ports
I0825 04:25:02.001955       1 service.go:390] Adding new service port "provisioning-7689-156/csi-hostpathplugin:dummy" at 100.64.150.112:12345/TCP
I0825 04:25:02.001980       1 service.go:390] Adding new service port "dns-9988/test-service-2:http" at 100.70.160.98:80/TCP
I0825 04:25:02.001991       1 service.go:390] Adding new service port "provisioning-7689-156/csi-hostpath-provisioner:dummy" at 100.66.239.159:12345/TCP
I0825 04:25:02.002000       1 service.go:390] Adding new service port "provisioning-7689-156/csi-hostpath-resizer:dummy" at 100.66.172.56:12345/TCP
I0825 04:25:02.002011       1 service.go:390] Adding new service port "provisioning-7689-156/csi-hostpath-snapshotter:dummy" at 100.69.242.31:12345/TCP
I0825 04:25:02.002204       1 proxier.go:871] Syncing iptables rules
I0825 04:25:02.026694       1 proxier.go:826] syncProxyRules took 24.88995ms
I0825 04:25:03.404074       1 proxier.go:871] Syncing iptables rules
I0825 04:25:03.431835       1 proxier.go:826] syncProxyRules took 27.995681ms
I0825 04:25:04.087926       1 proxier.go:871] Syncing iptables rules
I0825 04:25:04.112493       1 proxier.go:826] syncProxyRules took 24.870497ms
I0825 04:25:05.112999       1 proxier.go:871] Syncing iptables rules
I0825 04:25:05.173959       1 proxier.go:826] syncProxyRules took 61.318221ms
I0825 04:25:06.690588       1 proxier.go:871] Syncing iptables rules
I0825 04:25:06.717134       1 proxier.go:826] syncProxyRules took 26.831988ms
I0825 04:25:15.878347       1 service.go:275] Service provisioning-4472-374/csi-hostpath-attacher updated: 1 ports
I0825 04:25:15.878981       1 service.go:390] Adding new service port "provisioning-4472-374/csi-hostpath-attacher:dummy" at 100.65.67.38:12345/TCP
I0825 04:25:15.879258       1 proxier.go:871] Syncing iptables rules
I0825 04:25:15.914076       1 proxier.go:826] syncProxyRules took 35.28561ms
I0825 04:25:15.914677       1 proxier.go:871] Syncing iptables rules
I0825 04:25:15.955261       1 proxier.go:826] syncProxyRules took 41.155765ms
I0825 04:25:16.191662       1 service.go:275] Service provisioning-4472-374/csi-hostpathplugin updated: 1 ports
I0825 04:25:16.405619       1 service.go:275] Service provisioning-4472-374/csi-hostpath-provisioner updated: 1 ports
I0825 04:25:16.666143       1 service.go:275] Service provisioning-4472-374/csi-hostpath-resizer updated: 1 ports
I0825 04:25:16.959039       1 service.go:390] Adding new service port "provisioning-4472-374/csi-hostpathplugin:dummy" at 100.66.133.144:12345/TCP
I0825 04:25:16.959295       1 service.go:390] Adding new service port "provisioning-4472-374/csi-hostpath-provisioner:dummy" at 100.70.92.162:12345/TCP
I0825 04:25:16.959421       1 service.go:390] Adding new service port "provisioning-4472-374/csi-hostpath-resizer:dummy" at 100.68.85.205:12345/TCP
I0825 04:25:16.959649       1 proxier.go:871] Syncing iptables rules
I0825 04:25:16.966735       1 service.go:275] Service provisioning-4472-374/csi-hostpath-snapshotter updated: 1 ports
I0825 04:25:17.039448       1 proxier.go:826] syncProxyRules took 80.574333ms
I0825 04:25:18.040007       1 service.go:390] Adding new service port "provisioning-4472-374/csi-hostpath-snapshotter:dummy" at 100.66.228.60:12345/TCP
I0825 04:25:18.040258       1 proxier.go:871] Syncing iptables rules
I0825 04:25:18.062767       1 proxier.go:826] syncProxyRules took 22.902997ms
I0825 04:25:23.665245       1 proxier.go:871] Syncing iptables rules
I0825 04:25:23.718183       1 proxier.go:826] syncProxyRules took 53.23103ms
I0825 04:25:24.463886       1 proxier.go:871] Syncing iptables rules
I0825 04:25:24.491716       1 proxier.go:826] syncProxyRules took 28.112291ms
I0825 04:25:26.307338       1 proxier.go:871] Syncing iptables rules
I0825 04:25:26.377751       1 proxier.go:826] syncProxyRules took 70.707838ms
I0825 04:25:26.861238       1 proxier.go:871] Syncing iptables rules
I0825 04:25:26.886973       1 proxier.go:826] syncProxyRules took 26.029004ms
I0825 04:25:29.462172       1 proxier.go:871] Syncing iptables rules
I0825 04:25:29.485083       1 proxier.go:826] syncProxyRules took 23.191585ms
I0825 04:25:30.963178       1 service.go:275] Service provisioning-4008-2969/csi-hostpath-attacher updated: 1 ports
I0825 04:25:30.964035       1 service.go:390] Adding new service port "provisioning-4008-2969/csi-hostpath-attacher:dummy" at 100.67.19.55:12345/TCP
I0825 04:25:30.964256       1 proxier.go:871] Syncing iptables rules
I0825 04:25:30.997654       1 proxier.go:826] syncProxyRules took 33.784206ms
I0825 04:25:30.997978       1 proxier.go:871] Syncing iptables rules
I0825 04:25:31.021137       1 proxier.go:826] syncProxyRules took 23.451812ms
I0825 04:25:31.281784       1 service.go:275] Service provisioning-4008-2969/csi-hostpathplugin updated: 1 ports
I0825 04:25:31.498689       1 service.go:275] Service provisioning-4008-2969/csi-hostpath-provisioner updated: 1 ports
I0825 04:25:31.715297       1 service.go:275] Service provisioning-4008-2969/csi-hostpath-resizer updated: 1 ports
I0825 04:25:31.931167       1 service.go:275] Service provisioning-4008-2969/csi-hostpath-snapshotter updated: 1 ports
I0825 04:25:32.021524       1 service.go:390] Adding new service port "provisioning-4008-2969/csi-hostpathplugin:dummy" at 100.64.13.222:12345/TCP
I0825 04:25:32.021672       1 service.go:390] Adding new service port "provisioning-4008-2969/csi-hostpath-provisioner:dummy" at 100.71.93.58:12345/TCP
I0825 04:25:32.021738       1 service.go:390] Adding new service port "provisioning-4008-2969/csi-hostpath-resizer:dummy" at 100.70.231.164:12345/TCP
I0825 04:25:32.021799       1 service.go:390] Adding new service port "provisioning-4008-2969/csi-hostpath-snapshotter:dummy" at 100.70.166.70:12345/TCP
I0825 04:25:32.022012       1 proxier.go:871] Syncing iptables rules
I0825 04:25:32.068572       1 proxier.go:826] syncProxyRules took 47.221961ms
I0825 04:25:33.980026       1 proxier.go:871] Syncing iptables rules
I0825 04:25:34.006939       1 proxier.go:826] syncProxyRules took 27.119666ms
I0825 04:25:35.001236       1 proxier.go:871] Syncing iptables rules
I0825 04:25:35.077698       1 proxier.go:826] syncProxyRules took 76.747272ms
I0825 04:25:37.780949       1 proxier.go:871] Syncing iptables rules
I0825 04:25:37.823276       1 proxier.go:826] syncProxyRules took 42.668587ms
I0825 04:25:38.178766       1 proxier.go:871] Syncing iptables rules
I0825 04:25:38.213354       1 proxier.go:826] syncProxyRules took 34.884074ms
I0825 04:25:39.213701       1 proxier.go:871] Syncing iptables rules
I0825 04:25:39.246834       1 proxier.go:826] syncProxyRules took 33.3634ms
I0825 04:25:58.898310       1 service.go:275] Service provisioning-4472-374/csi-hostpath-attacher updated: 0 ports
I0825 04:25:58.898521       1 service.go:415] Removing service port "provisioning-4472-374/csi-hostpath-attacher:dummy"
I0825 04:25:58.898804       1 proxier.go:871] Syncing iptables rules
I0825 04:25:58.939228       1 proxier.go:826] syncProxyRules took 40.880124ms
I0825 04:25:59.238565       1 service.go:275] Service provisioning-4472-374/csi-hostpathplugin updated: 0 ports
I0825 04:25:59.239255       1 service.go:415] Removing service port "provisioning-4472-374/csi-hostpathplugin:dummy"
I0825 04:25:59.239607       1 proxier.go:871] Syncing iptables rules
I0825 04:25:59.268541       1 proxier.go:826] syncProxyRules took 29.939463ms
I0825 04:25:59.464394       1 service.go:275] Service provisioning-4472-374/csi-hostpath-provisioner updated: 0 ports
I0825 04:25:59.712178       1 service.go:275] Service provisioning-4472-374/csi-hostpath-resizer updated: 0 ports
I0825 04:25:59.933776       1 service.go:275] Service provisioning-4472-374/csi-hostpath-snapshotter updated: 0 ports
I0825 04:25:59.934465       1 service.go:415] Removing service port "provisioning-4472-374/csi-hostpath-snapshotter:dummy"
I0825 04:25:59.934603       1 service.go:415] Removing service port "provisioning-4472-374/csi-hostpath-provisioner:dummy"
I0825 04:25:59.934834       1 service.go:415] Removing service port "provisioning-4472-374/csi-hostpath-resizer:dummy"
I0825 04:25:59.935179       1 proxier.go:871] Syncing iptables rules
I0825 04:25:59.977913       1 proxier.go:826] syncProxyRules took 43.596429ms
I0825 04:26:00.978506       1 proxier.go:871] Syncing iptables rules
I0825 04:26:01.003609       1 proxier.go:826] syncProxyRules took 25.423637ms
I0825 04:26:03.942613       1 service.go:275] Service services-8400/no-pods updated: 1 ports
I0825 04:26:03.942914       1 service.go:390] Adding new service port "services-8400/no-pods" at 100.67.139.95:80/TCP
I0825 04:26:03.943137       1 proxier.go:871] Syncing iptables rules
I0825 04:26:03.969360       1 proxier.go:826] syncProxyRules took 26.643139ms
I0825 04:26:03.969667       1 proxier.go:871] Syncing iptables rules
I0825 04:26:03.995770       1 proxier.go:826] syncProxyRules took 26.35478ms
I0825 04:26:09.903658       1 service.go:275] Service volume-expand-2507-7881/csi-hostpath-attacher updated: 1 ports
I0825 04:26:09.904054       1 service.go:390] Adding new service port "volume-expand-2507-7881/csi-hostpath-attacher:dummy" at 100.64.177.180:12345/TCP
I0825 04:26:09.904474       1 proxier.go:871] Syncing iptables rules
I0825 04:26:09.947005       1 proxier.go:826] syncProxyRules took 43.10235ms
I0825 04:26:09.947299       1 proxier.go:871] Syncing iptables rules
I0825 04:26:09.990322       1 proxier.go:826] syncProxyRules took 43.279338ms
I0825 04:26:10.020012       1 service.go:275] Service services-3853/affinity-nodeport-timeout updated: 1 ports
I0825 04:26:10.226497       1 service.go:275] Service volume-expand-2507-7881/csi-hostpathplugin updated: 1 ports
I0825 04:26:10.437501       1 service.go:275] Service volume-expand-2507-7881/csi-hostpath-provisioner updated: 1 ports
I0825 04:26:10.647192       1 service.go:275] Service volume-expand-2507-7881/csi-hostpath-resizer updated: 1 ports
I0825 04:26:10.862747       1 service.go:275] Service volume-expand-2507-7881/csi-hostpath-snapshotter updated: 1 ports
I0825 04:26:10.990752       1 service.go:390] Adding new service port "services-3853/affinity-nodeport-timeout" at 100.70.58.117:80/TCP
I0825 04:26:10.990778       1 service.go:390] Adding new service port "volume-expand-2507-7881/csi-hostpathplugin:dummy" at 100.70.208.203:12345/TCP
I0825 04:26:10.990807       1 service.go:390] Adding new service port "volume-expand-2507-7881/csi-hostpath-provisioner:dummy" at 100.64.151.31:12345/TCP
I0825 04:26:10.990825       1 service.go:390] Adding new service port "volume-expand-2507-7881/csi-hostpath-resizer:dummy" at 100.69.190.232:12345/TCP
I0825 04:26:10.990841       1 service.go:390] Adding new service port "volume-expand-2507-7881/csi-hostpath-snapshotter:dummy" at 100.68.160.230:12345/TCP
I0825 04:26:10.991002       1 proxier.go:871] Syncing iptables rules
I0825 04:26:11.013510       1 proxier.go:1715] Opened local port "nodePort for services-3853/affinity-nodeport-timeout" (:30321/tcp)
I0825 04:26:11.017716       1 proxier.go:826] syncProxyRules took 27.104607ms
I0825 04:26:12.068223       1 proxier.go:871] 
Syncing iptables rules\nI0825 04:26:12.111367       1 proxier.go:826] syncProxyRules took 43.493964ms\nI0825 04:26:13.111886       1 proxier.go:871] Syncing iptables rules\nI0825 04:26:13.152108       1 proxier.go:826] syncProxyRules took 40.538134ms\nI0825 04:26:13.481378       1 service.go:275] Service provisioning-4008-2969/csi-hostpath-attacher updated: 0 ports\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-44-96.eu-west-3.compute.internal ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-44-96.eu-west-3.compute.internal ====\nI0825 04:05:16.155856       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0825 04:05:16.155988       1 flags.go:59] FLAG: --address=\"0.0.0.0\"\nI0825 04:05:16.156000       1 flags.go:59] FLAG: --algorithm-provider=\"\"\nI0825 04:05:16.156004       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0825 04:05:16.156008       1 flags.go:59] FLAG: --authentication-kubeconfig=\"\"\nI0825 04:05:16.156012       1 flags.go:59] FLAG: --authentication-skip-lookup=\"false\"\nI0825 04:05:16.156021       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0825 04:05:16.156028       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0825 04:05:16.156032       1 flags.go:59] FLAG: --authorization-always-allow-paths=\"[/healthz]\"\nI0825 04:05:16.156041       1 flags.go:59] FLAG: --authorization-kubeconfig=\"\"\nI0825 04:05:16.156045       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0825 04:05:16.156049       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0825 04:05:16.156053       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0825 04:05:16.156061       1 flags.go:59] FLAG: --cert-dir=\"\"\nI0825 04:05:16.156065       1 flags.go:59] FLAG: --client-ca-file=\"\"\nI0825 04:05:16.156069       1 flags.go:59] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0825 
04:05:16.156073       1 flags.go:59] FLAG: --contention-profiling=\"true\"\nI0825 04:05:16.156077       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0825 04:05:16.156081       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0825 04:05:16.156087       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight=\"1\"\nI0825 04:05:16.156093       1 flags.go:59] FLAG: --help=\"false\"\nI0825 04:05:16.156097       1 flags.go:59] FLAG: --http2-max-streams-per-connection=\"0\"\nI0825 04:05:16.156101       1 flags.go:59] FLAG: --kube-api-burst=\"100\"\nI0825 04:05:16.156105       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0825 04:05:16.156110       1 flags.go:59] FLAG: --kube-api-qps=\"50\"\nI0825 04:05:16.156116       1 flags.go:59] FLAG: --kubeconfig=\"\"\nI0825 04:05:16.156120       1 flags.go:59] FLAG: --leader-elect=\"true\"\nI0825 04:05:16.156124       1 flags.go:59] FLAG: --leader-elect-lease-duration=\"15s\"\nI0825 04:05:16.156128       1 flags.go:59] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0825 04:05:16.156132       1 flags.go:59] FLAG: --leader-elect-resource-lock=\"leases\"\nI0825 04:05:16.156136       1 flags.go:59] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0825 04:05:16.156140       1 flags.go:59] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0825 04:05:16.156147       1 flags.go:59] FLAG: --leader-elect-retry-period=\"2s\"\nI0825 04:05:16.156151       1 flags.go:59] FLAG: --lock-object-name=\"kube-scheduler\"\nI0825 04:05:16.156155       1 flags.go:59] FLAG: --lock-object-namespace=\"kube-system\"\nI0825 04:05:16.156159       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0825 04:05:16.156166       1 flags.go:59] FLAG: --log-dir=\"\"\nI0825 04:05:16.156170       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-scheduler.log\"\nI0825 04:05:16.156175       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0825 04:05:16.156179       1 flags.go:59] FLAG: 
--log-flush-frequency=\"5s\"\nI0825 04:05:16.156183       1 flags.go:59] FLAG: --logging-format=\"text\"\nI0825 04:05:16.156187       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0825 04:05:16.156191       1 flags.go:59] FLAG: --master=\"\"\nI0825 04:05:16.156194       1 flags.go:59] FLAG: --one-output=\"false\"\nI0825 04:05:16.156198       1 flags.go:59] FLAG: --permit-port-sharing=\"false\"\nI0825 04:05:16.156202       1 flags.go:59] FLAG: --policy-config-file=\"\"\nI0825 04:05:16.156206       1 flags.go:59] FLAG: --policy-configmap=\"\"\nI0825 04:05:16.156209       1 flags.go:59] FLAG: --policy-configmap-namespace=\"kube-system\"\nI0825 04:05:16.156213       1 flags.go:59] FLAG: --port=\"10251\"\nI0825 04:05:16.156218       1 flags.go:59] FLAG: --profiling=\"true\"\nI0825 04:05:16.156222       1 flags.go:59] FLAG: --requestheader-allowed-names=\"[]\"\nI0825 04:05:16.156232       1 flags.go:59] FLAG: --requestheader-client-ca-file=\"\"\nI0825 04:05:16.156237       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0825 04:05:16.156246       1 flags.go:59] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0825 04:05:16.156254       1 flags.go:59] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0825 04:05:16.156260       1 flags.go:59] FLAG: --scheduler-name=\"default-scheduler\"\nI0825 04:05:16.156265       1 flags.go:59] FLAG: --secure-port=\"10259\"\nI0825 04:05:16.156269       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0825 04:05:16.156273       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0825 04:05:16.156277       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0825 04:05:16.156280       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0825 04:05:16.156284       1 flags.go:59] FLAG: --tls-cert-file=\"\"\nI0825 04:05:16.156288       1 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"\nI0825 04:05:16.156295       1 flags.go:59] FLAG: --tls-min-version=\"\"\nI0825 04:05:16.156299       1 
flags.go:59] FLAG: --tls-private-key-file=\"\"\nI0825 04:05:16.156303       1 flags.go:59] FLAG: --tls-sni-cert-key=\"[]\"\nI0825 04:05:16.156308       1 flags.go:59] FLAG: --use-legacy-policy-config=\"false\"\nI0825 04:05:16.156312       1 flags.go:59] FLAG: --v=\"2\"\nI0825 04:05:16.156316       1 flags.go:59] FLAG: --version=\"false\"\nI0825 04:05:16.156322       1 flags.go:59] FLAG: --vmodule=\"\"\nI0825 04:05:16.156331       1 flags.go:59] FLAG: --write-config-to=\"\"\nI0825 04:05:16.577053       1 serving.go:331] Generated self-signed cert in-memory\nW0825 04:05:16.957969       1 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.\nW0825 04:05:16.957987       1 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.\nW0825 04:05:16.958002       1 authorization.go:176] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.\nI0825 04:05:16.962184       1 factory.go:188] Creating scheduler from algorithm provider 'DefaultProvider'\nI0825 04:05:16.969380       1 configfile.go:72] Using component config:\napiVersion: kubescheduler.config.k8s.io/v1beta1\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 100\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n  qps: 50\nenableContentionProfiling: true\nenableProfiling: true\nhealthzBindAddress: 0.0.0.0:10251\nkind: KubeSchedulerConfiguration\nleaderElection:\n  leaderElect: true\n  leaseDuration: 15s\n  renewDeadline: 10s\n  resourceLock: leases\n  resourceName: kube-scheduler\n  resourceNamespace: kube-system\n  retryPeriod: 2s\nmetricsBindAddress: 0.0.0.0:10251\nparallelism: 
16\npercentageOfNodesToScore: 0\npodInitialBackoffSeconds: 1\npodMaxBackoffSeconds: 10\nprofiles:\n- pluginConfig:\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: DefaultPreemptionArgs\n      minCandidateNodesAbsolute: 100\n      minCandidateNodesPercentage: 10\n    name: DefaultPreemption\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      hardPodAffinityWeight: 1\n      kind: InterPodAffinityArgs\n    name: InterPodAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeAffinityArgs\n    name: NodeAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesFitArgs\n    name: NodeResourcesFit\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesLeastAllocatedArgs\n      resources:\n      - name: cpu\n        weight: 1\n      - name: memory\n        weight: 1\n    name: NodeResourcesLeastAllocated\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      defaultingType: System\n      kind: PodTopologySpreadArgs\n    name: PodTopologySpread\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      bindTimeoutSeconds: 600\n      kind: VolumeBindingArgs\n    name: VolumeBinding\n  plugins:\n    bind:\n      enabled:\n      - name: DefaultBinder\n        weight: 0\n    filter:\n      enabled:\n      - name: NodeUnschedulable\n        weight: 0\n      - name: NodeName\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: NodeResourcesFit\n        weight: 0\n      - name: VolumeRestrictions\n        weight: 0\n      - name: EBSLimits\n        weight: 0\n      - name: GCEPDLimits\n        weight: 0\n      - name: NodeVolumeLimits\n        weight: 0\n      - name: AzureDiskLimits\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: 
VolumeZone\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n    permit: {}\n    postBind: {}\n    postFilter:\n      enabled:\n      - name: DefaultPreemption\n        weight: 0\n    preBind:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    preFilter:\n      enabled:\n      - name: NodeResourcesFit\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n    preScore:\n      enabled:\n      - name: InterPodAffinity\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n    queueSort:\n      enabled:\n      - name: PrioritySort\n        weight: 0\n    reserve:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    score:\n      enabled:\n      - name: NodeResourcesBalancedAllocation\n        weight: 1\n      - name: ImageLocality\n        weight: 1\n      - name: InterPodAffinity\n        weight: 1\n      - name: NodeResourcesLeastAllocated\n        weight: 1\n      - name: NodeAffinity\n        weight: 1\n      - name: NodePreferAvoidPods\n        weight: 10000\n      - name: PodTopologySpread\n        weight: 2\n      - name: TaintToleration\n        weight: 1\n  schedulerName: default-scheduler\n\nI0825 04:05:16.969400       1 server.go:138] Starting Kubernetes Scheduler version v1.20.10\nW0825 04:05:16.970615       1 authorization.go:47] Authorization is disabled\nW0825 04:05:16.970625       1 authentication.go:40] Authentication is disabled\nI0825 04:05:16.970633       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI0825 04:05:16.972254       1 tlsconfig.go:200] loaded serving cert [\"Generated self signed cert\"]: \"localhost@1629864316\" [serving] validServingFor=[127.0.0.1,localhost,localhost] 
issuer=\"localhost-ca@1629864316\" (2021-08-25 03:05:16 +0000 UTC to 2022-08-25 03:05:16 +0000 UTC (now=2021-08-25 04:05:16.972231833 +0000 UTC))\nI0825 04:05:16.972481       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1629864316\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1629864316\" (2021-08-25 03:05:16 +0000 UTC to 2022-08-25 03:05:16 +0000 UTC (now=2021-08-25 04:05:16.972469101 +0000 UTC))\nI0825 04:05:16.972503       1 secure_serving.go:197] Serving securely on [::]:10259\nI0825 04:05:16.972610       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0825 04:05:16.972855       1 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.973157       1 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.973396       1 reflector.go:219] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.973618       1 reflector.go:219] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.973919       1 reflector.go:219] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.974176       1 reflector.go:219] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.974429       1 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.974649       1 reflector.go:219] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.974875       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.975089       1 reflector.go:219] Starting reflector *v1.StorageClass (0s) 
from k8s.io/client-go/informers/factory.go:134\nI0825 04:05:16.975346       1 reflector.go:219] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134\nE0825 04:05:16.975886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.975978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976060       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976217       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 
04:05:16.976359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:16.976651       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:17.789270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:17.898002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get 
\"https://127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:17.909001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.023677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.140794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.170684       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.199946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.389155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.462712       1 reflector.go:138] 
k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.536840       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:18.553869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:19.717478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:19.868189       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:20.420198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:20.642114       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get 
\"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:20.653399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:20.714275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:21.000939       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:21.090012       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:21.115462       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:21.200171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:21.696400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list 
*v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:23.751952       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:24.693213       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:24.805343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:24.957303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:25.378253       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:25.418715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: 
connect: connection refused\nE0825 04:05:25.508319       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:25.682045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:26.421402       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:26.555364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:26.696176       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:31.690812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0825 04:05:32.004071       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get 
"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0825 04:05:32.924238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0825 04:05:33.135041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0825 04:05:34.248109       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0825 04:05:35.141968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0825 04:05:35.259715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
I0825 04:05:45.587497       1 trace.go:205] Trace[1330711146]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:35.586) (total time: 10000ms):
Trace[1330711146]: [10.000682953s] [10.000682953s] END
E0825 04:05:45.587518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0825 04:05:46.805427       1 trace.go:205] Trace[775473005]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:36.804) (total time: 10000ms):
Trace[775473005]: [10.00060432s] [10.00060432s] END
E0825 04:05:46.805450       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0825 04:05:47.769167       1 trace.go:205] Trace[1505348867]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:37.768) (total time: 10000ms):
Trace[1505348867]: [10.000579292s] [10.000579292s] END
E0825 04:05:47.769194       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0825 04:05:48.394750       1 trace.go:205] Trace[1316686299]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:38.394) (total time: 10000ms):
Trace[1316686299]: [10.00057434s] [10.00057434s] END
E0825 04:05:48.394771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0825 04:05:55.666554       1 trace.go:205] Trace[294261354]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:45.665) (total time: 10000ms):
Trace[294261354]: [10.000582736s] [10.000582736s] END
E0825 04:05:55.666578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
E0825 04:05:59.621166       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0825 04:05:59.622141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0825 04:05:59.622458       1 trace.go:205] Trace[50813801]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:48.422) (total time: 11200ms):
Trace[50813801]: [11.200306605s] [11.200306605s] END
E0825 04:05:59.622475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0825 04:05:59.622661       1 trace.go:205] Trace[120488219]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:49.078) (total time: 10544ms):
Trace[120488219]: [10.544475889s] [10.544475889s] END
E0825 04:05:59.622683       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0825 04:05:59.622942       1 trace.go:205] Trace[1389102223]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (25-Aug-2021 04:05:47.943) (total time: 11679ms):
Trace[1389102223]: [11.679778524s] [11.679778524s] END
E0825 04:05:59.622957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0825 04:05:59.623093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0825 04:06:03.855505       1 node_tree.go:65] Added node "ip-172-20-44-96.eu-west-3.compute.internal" in group "eu-west-3:\x00:eu-west-3a" to NodeTree
I0825 04:06:46.473041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...
I0825 04:06:46.479418       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
I0825 04:06:46.480334       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:46.497443       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kube-flannel-ds-gkpwd" node="ip-172-20-44-96.eu-west-3.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0825 04:06:46.509933       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/dns-controller-7474b747c6-w282g" node="ip-172-20-44-96.eu-west-3.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0825 04:06:46.519671       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5489b75945-wxzfq" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:46.547285       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:46.548053       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5489b75945-wxzfq" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:49.480326       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:53.545869       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5489b75945-wxzfq" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:53.546611       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:59.872959       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5489b75945-wxzfq" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:06:59.890350       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kops-controller-6f475" node="ip-172-20-44-96.eu-west-3.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0825 04:07:02.481551       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:07:08.482022       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5489b75945-wxzfq" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0825 04:07:23.724792       1 node_tree.go:65] Added node "ip-172-20-36-72.eu-west-3.compute.internal" in group "eu-west-3:\x00:eu-west-3a" to NodeTree
I0825 04:07:23.725121       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" err="0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0825 04:07:23.747077       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5489b75945-wxzfq" err="0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0825 04:07:23.761847       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kube-flannel-ds-4f8d9" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=2 feasibleNodes=1
I0825 04:07:24.149034       1 node_tree.go:65] Added node "ip-172-20-38-132.eu-west-3.compute.internal" in group "eu-west-3:\x00:eu-west-3a" to NodeTree
I0825 04:07:24.176431       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kube-flannel-ds-q4bc4" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=3 feasibleNodes=1
I0825 04:07:29.687748       1 node_tree.go:65] Added node "ip-172-20-32-67.eu-west-3.compute.internal" in group "eu-west-3:\x00:eu-west-3a" to NodeTree
I0825 04:07:29.719465       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kube-flannel-ds-qmrf7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=4 feasibleNodes=1
I0825 04:07:31.683569       1 node_tree.go:65] Added node "ip-172-20-37-233.eu-west-3.compute.internal" in group "eu-west-3:\x00:eu-west-3a" to NodeTree
I0825 04:07:31.706725       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kube-flannel-ds-79rvz" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:07:34.491312       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-5489b75945-wxzfq" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0825 04:07:34.492018       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-autoscaler-6f594f4c58-swfmx" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0825 04:07:37.732699       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-5489b75945-f586w" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0825 04:10:14.261277       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7770/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-l857t" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:14.411283       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-361/termination-message-containerbb7b888d-6759-4cb3-bad4-a66c874e5391" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:14.436262       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-5414/pod-handle-http-request" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:14.448087       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-6389/terminate-cmd-rpae6b0f819-b7da-4a32-a75d-8d178e76603d" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:14.465677       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-5272/downwardapi-volume-bd3c8b49-60d9-40d0-b988-5be0cfb777f6" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:14.477831       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-3410/nfs-server" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:14.550809       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-916/pod-projected-configmaps-cc5da15d-97f3-4818-9dc1-f22a3a67a988" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:14.784402       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4021/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-cr5xr" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:14.800844       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5916/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-mlrkg" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:14.909189       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-1517/startup-7bfe64f1-f42b-4e81-9fef-2cfb21e82354" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:15.209791       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7460/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.297869       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6392/exec-volume-test-inlinevolume-zkmz" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:15.314800       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7460/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.421689       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7460/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.524848       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7460/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.568243       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6820/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.661422       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-589/downwardapi-volume-674ff96f-3f2a-4867-b8fb-2b6de3dcabec" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:15.671462       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6820/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.764651       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-9322/pod-handle-http-request" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:15.765396       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3181/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-qpf5k" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.776195       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6820/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:15.879529       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6820/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:16.066071       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2581/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-f7qrc" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:16.720211       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-5833/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-ckgxg" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:16.873608       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-2331/bin-false2a6ba20f-24e7-4e5c-9952-0e1f6318664c" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:17.163348       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9805/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:17.266779       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9805/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:17.371919       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9805/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:17.474880       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9805/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:17.764331       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8701/update-demo-nautilus-vdmkn" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:17.770432       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8701/update-demo-nautilus-lr26b" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:18.033389       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-4808/sample-webhook-deployment-6bd9446d55-4wdtx" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:19.007762       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3092-1661/csi-hostpath-attacher-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:19.326955       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3092-1661/csi-hostpathplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:19.535034       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3092-1661/csi-hostpath-provisioner-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:19.749847       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3092-1661/csi-hostpath-resizer-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:19.787390       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2667-3982/csi-mockplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:19.975933       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3092-1661/csi-hostpath-snapshotter-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:20.868819       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-5414/pod-with-prestop-http-hook" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:22.177344       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-9322/pod-with-poststart-http-hook" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:22.578794       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5916/pod-dbbfa737-c1dd-4c2a-b9e0-0b0f41bdcb5b" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:23.047086       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5326/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:23.151766       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5326/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:23.255461       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5326/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:23.359251       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5326/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:23.609031       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-3919/pod-configmaps-98d39201-c0ea-4674-b3b0-b0869138d502" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:24.561463       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4021/pod-feed5bfe-6428-409e-9734-6bd86c775a26" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:24.610261       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-509/aws-injector" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:25.062150       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7827/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-c2bl4" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:26.880572       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-3410/pvc-tester-lqg72" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:27.513612       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-605/bin-false647bfe5d-70b6-4612-9db5-0caea973cd91" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:27.696259       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-1293/busybox-051a8a63-9d1e-4820-b26b-a64d889c1566" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:28.618232       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7213/pod-subpath-test-inlinevolume-zbbs" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:29.534328       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-3410/pvc-tester-kflbn" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:32.575923       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-115/busybox-user-65534-8febc304-fcc4-4e12-9e80-8df957288102" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:33.269610       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-5833/pod-7e9f2a82-ddc7-4b55-921f-25e5dacd08e2" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:33.483857       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3181/pod-subpath-test-preprovisionedpv-jgcp" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:33.890112       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-4122/simpletest.rc-w9lcw" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:33.899189       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-4122/simpletest.rc-thfdt" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:34.436318       1 scheduler.go:604] "Successfully bound pod to node" pod="prestop-1313/server" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:34.442349       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7770/pod-9aa1a2bf-6a7a-4481-a40e-f272400f546a" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:35.440776       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3092/pod-subpath-test-dynamicpv-rzf5" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:35.478358       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7827/pod-549a5a82-4ce1-4886-bdb4-5b5012126b9d" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:35.842124       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-3069/test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:36.959276       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7770/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-67t95" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:37.503117       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2667/pvc-volume-tester-hs5q8" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:37.791218       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-5833/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-nj84s" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:37.881609       1 scheduler.go:604] "Successfully bound pod to node" pod="mounted-volume-expand-4208/deployment-f8d78de3-23ec-4327-9b7d-6ae05a4a3162-6658b8974cs9kxh" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:38.852788       1 scheduler.go:604] "Successfully bound pod to node" pod="prestop-1313/tester" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:39.279277       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9202/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:39.385136       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9202/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:39.490266       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9202/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:39.594599       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9202/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:40.658235       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-3069/test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:41.163532       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-5441/sample-webhook-deployment-6bd9446d55-95bkg" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:41.879285       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3723/pod-subpath-test-dynamicpv-pff9" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:42.916804       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3181/pod-subpath-test-preprovisionedpv-jgcp" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:43.126628       1 scheduler.go:604] "Successfully bound pod to node" pod="subpath-2978/pod-subpath-test-configmap-n7gr" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:43.385578       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-3069/test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:44.812289       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-7271/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-dnddx" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:45.038614       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-6389/terminate-cmd-rpofcdc99fb8-8d4a-416b-a961-a23a627557f5" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.501125       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5326/test-container-pod" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.568472       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7460/test-container-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.603032       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5326/host-test-container-pod" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.813909       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-j24z9" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.817083       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-dzdlc" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.821538       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-477v7" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.854781       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-46c4h" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.854998       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-w4kkd" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.855353       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-h876s" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.855602       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-pxtw9" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.871558       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-6dspn" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.883634       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-d7zzd" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:46.885376       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1793/simpletest.rc-fhgr7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:47.251003       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-5454/downward-api-1a345b6c-4127-495e-adff-798059196975" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:48.213117       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2581/pod-subpath-test-preprovisionedpv-29r2" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:48.214864       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-3069/test-pod-46ccfd41-1bc0-44f6-9786-eb52b9b2de77" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:48.505164       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-4686/downward-api-4f4e80e6-954d-47d5-a278-6a19b2987647" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:48.609794       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9805/test-container-pod" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:49.017801       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6820/test-container-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:49.121507       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6820/host-test-container-pod" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:51.065543       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-3341/pod-configmaps-c08a08ec-fcec-45eb-b7dc-d5d0b101b06f" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:52.081598       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-7568/pod-projected-secrets-a78b49c0-5526-4d7e-9e9d-54e139c9da13" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:52.420930       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-6389/terminate-cmd-rpnbbdb3a42-5bb4-4065-a1d5-4f349b41ee6c" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:53.111351       1 scheduler.go:604] "Successfully bound pod to node" pod="mounted-volume-expand-4208/deployment-f8d78de3-23ec-4327-9b7d-6ae05a4a3162-6658b8974c7bcxw" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:54.137721       1 scheduler.go:604] "Successfully bound pod to node" pod="secrets-1911/pod-secrets-3c43aa69-c9c3-41cc-b07c-b894f318ddce" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:56.046032       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-6259/pod-projected-configmaps-48b906c9-a249-498a-8580-2bd23df9e7ed" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:56.212658       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-5b74d4d4b5-lxrxn" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:56.222771       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-5b74d4d4b5-r2q5b" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=3
I0825 04:10:56.232094       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-5b74d4d4b5-tzfzz" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0825 04:10:57.034006       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-1118/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:57.136089       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-1118/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:57.174009       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-509/aws-client" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:10:57.240565       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-1118/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:57.343363       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-1118/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:57.726985       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-7113-1408/csi-mockplugin-0" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:10:57.934756       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-7113-1408/csi-mockplugin-resizer-0" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:11:00.109683       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="topology-4282/pod-687277f8-12e0-446a-ae80-47949d3922f4" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims."
I0825 04:11:00.117285       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="topology-4282/pod-687277f8-12e0-446a-ae80-47949d3922f4" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims."
I0825 04:11:00.130175       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-2941/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-v7dmg" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:11:01.033794       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-434/pvc-volume-tester-writer-v5pv6" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:11:01.847034       1 scheduler.go:604] "Successfully bound pod to node" pod="sysctl-7915/sysctl-51cc1d0f-297a-4941-800f-bd5aa51c42e7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:11:02.507976       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="topology-4282/pod-687277f8-12e0-446a-ae80-47949d3922f4" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims."
I0825 04:11:02.754859       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-9202/test-container-pod" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:11:03.436193       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-7271/exec-volume-test-preprovisionedpv-bb2l" node="ip-172-20-37-233.eu-west-3.compute.internal"
evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:03.978292       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"tables-9028/pod-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:05.029213       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8828/pod-projected-secrets-7e0538ad-e3b5-4745-b9eb-db77c2cc6a3b\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:05.570072       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-975/concurrent-1629864660-kw9zp\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:05.804329       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2941/pod-be46b997-e860-4165-a1ca-9a3958353b46\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:06.164930       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-3849/pod-0629f22f-f1d7-47b2-a927-eb584afce945\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:06.513011       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"topology-4282/pod-687277f8-12e0-446a-ae80-47949d3922f4\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:07.268877       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7435/termination-message-containerbd444c8d-710b-4e59-bdf2-05ae2d58729b\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:07.499183       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6263/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-2mkk5\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:08.298178       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6888/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-8bbfv\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:08.780014       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"var-expansion-2389/var-expansion-4046d1a7-79d0-4802-9159-d7ca92e4d37d\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:10.715965       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6758/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-pgmqn\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:11.131741       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6263/pod-53f1b18f-cc3c-46a2-9b38-f2bc587f0edb\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:11.327574       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9609/aws-injector\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:11.663452       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9597-9995/csi-mockplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:11.834968       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2941/pod-7b3c9407-d7e7-48b1-be9a-42d89bde5924\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:11.868495       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9597-9995/csi-mockplugin-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:15.244907       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"persistent-local-volumes-test-9908/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-95hz9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:15.523270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7113/pvc-volume-tester-8xmlw\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:15.784279       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-7538/pfpod\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:16.863312       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-2267/pod-hostip-781d16b7-1d46-452b-850a-01e8ffeab463\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:18.001965       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9597/pvc-volume-tester-cd9pn\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:18.257527       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6888/pod-subpath-test-preprovisionedpv-shd9\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:18.586096       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-1118/test-container-pod\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:18.693220       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-1118/host-test-container-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:20.239339       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5246/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-9hhh4\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0825 04:11:23.206120       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6888/pod-subpath-test-preprovisionedpv-shd9\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:24.885363       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9908/pod-fdf7bb6f-cd10-455b-a821-0f0fa5667250\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:25.167654       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6263/pod-9c96286f-2113-47ee-bee4-ed734993cc86\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:25.323347       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-434/pvc-volume-tester-reader-8dh2b\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:25.732627       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-6729/alpine-nnp-true-521cf9e6-c375-41cc-9147-bd43c04179a0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:25.856341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4634/pod-subpath-test-inlinevolume-chc9\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:27.069519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5636-8830/csi-hostpath-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:27.397561       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5636-8830/csi-hostpathplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:27.595698       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"volume-expand-5636-8830/csi-hostpath-provisioner-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:27.815109       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5636-8830/csi-hostpath-resizer-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:28.030851       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5636-8830/csi-hostpath-snapshotter-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:28.826600       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-7880/pod-ed5a1d0d-77c5-49cf-bea5-c13e06d83a75\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:28.980771       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"proxy-1656/proxy-service-w5kv2-bzltw\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:30.867626       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5636/pod-0ada6e07-96d5-4ead-b031-143513fb4ba3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:31.605013       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6381/httpd\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:31.807578       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7688/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-t8699\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:32.706803       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6758/pod-subpath-test-preprovisionedpv-fkxg\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:32.972130    
   1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3228/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:33.076272       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3228/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:33.188039       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3228/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:33.297226       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3228/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:33.673633       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4163/pod-subpath-test-dynamicpv-vljx\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:34.415936       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5246/pod-subpath-test-preprovisionedpv-hgbc\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:34.938359       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9908/pod-b71abb0a-120c-41d1-82a2-1e95c3199cde\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:36.785471       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9609/aws-client\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:37.405550       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8605/externalname-service-94kfc\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:37.405746       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"services-8605/externalname-service-thqkw\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:37.429787       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-3849/pod-877da89b-b9d3-448b-9f56-0b0397b5cd2a\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:41.380456       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-577/sample-webhook-deployment-6bd9446d55-8fxq5\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:43.643268       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8605/execpodvwkl9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:45.079686       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7872/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-dd5t9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:45.479611       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-3186/security-context-44796083-4be0-459b-81a0-8553b4a1258b\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:46.005777       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5636/pod-6f22f4ca-0f81-4f47-9a31-1b37c8369440\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:46.053158       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-3632/test-new-deployment-dd94f59b7-qss79\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:48.124535       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7688/pod-190bc3c3-0184-4b26-9601-5fd27cf286da\" 
node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:48.891982       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6262/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:48.992532       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6262/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:49.098114       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6262/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:49.201321       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6262/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:50.653638       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7688/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-pbm4f\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:51.211230       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6381/success\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:51.368289       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8431/e2e-test-httpd-pod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:51.498519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7872/pod-20f304e9-885b-4df6-ba6e-34659e4253d8\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:52.304336       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-5414/dns-test-8352862c-cf9c-488d-a052-fa7461f48053\" 
node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:54.463384       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3228/test-container-pod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:54.494516       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6381/failure-1\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:55.013016       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5971-235/csi-hostpath-attacher-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:55.276934       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4603/pod-subpath-test-inlinevolume-qmnz\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:55.340063       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5971-235/csi-hostpathplugin-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:55.558635       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5971-235/csi-hostpath-provisioner-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:55.566624       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7872/pod-c0c8c713-4a5d-47bc-b46a-5716c61cb333\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:55.716615       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-9040/fail-once-local-lfbf6\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:55.731306       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-9040/fail-once-local-fdpdz\" 
node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:55.766329       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5971-235/csi-hostpath-resizer-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:55.993648       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5971-235/csi-hostpath-snapshotter-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:57.563001       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6381/failure-2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:58.641570       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904-5672/csi-hostpath-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:58.823278       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-9040/fail-once-local-85whs\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:58.860629       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-9040/fail-once-local-lddxq\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:11:58.967664       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904-5672/csi-hostpathplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:59.190543       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904-5672/csi-hostpath-provisioner-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:59.416225       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904-5672/csi-hostpath-resizer-0\" 
node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:59.644107       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904-5672/csi-hostpath-snapshotter-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:59.834991       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904/inline-volume-tester-5mdsf\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:11:59.959233       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9424/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-p2gch\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:00.489983       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6275-4785/csi-mockplugin-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:00.585736       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6275-4785/csi-mockplugin-attacher-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:01.180687       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3067/pod-subpath-test-inlinevolume-kzlr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:02.020425       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-3044/pfpod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:04.665151       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2701/pause\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:04.744491       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"container-probe-8546/liveness-7c64fd28-57e5-4746-a674-9d3358abf744\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:05.645468       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-4797/replace-1629864720-bjzdk\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:05.662745       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-535/concurrent-1629864720-bp6qc\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:05.889985       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-6924/test-pod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:06.255410       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-2904/inline-volume-tester2-rrc8z\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:06.973558       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3817/emptydir-injector\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:07.311185       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-6551/test-rolling-update-with-lb-865d6c9bb7-nmbdm\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI0825 04:12:09.706948       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4268/httpd-deployment-5c84db5954-8cf6r\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:09.715816       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4268/httpd-deployment-5c84db5954-76zgj\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:10.032142       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"deployment-6551/test-rolling-update-with-lb-865d6c9bb7-llphx\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=2\nI0825 04:12:10.307541       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-6924/test-host-network-pod\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:11.229193       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6275/pvc-volume-tester-klc78\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:11.510045       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4268/httpd-deployment-5c84db5954-nmjn8\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:11.958773       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-6551/test-rolling-update-with-lb-865d6c9bb7-sc98v\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:12:12.257914       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4268/httpd-deployment-86bff9b6d7-bw9v8\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:12.346321       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6262/test-container-pod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:13.124618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-464/termination-message-container509cf2f2-12a8-407b-97ce-b03c33b0f2d8\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:12:13.497179       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-4412/pod-update-activedeadlineseconds-002eb6a4-6b1d-4156-9e86-362e1a2293bd\" 
node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:14.963954       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8954/httpd-deployment-86bff9b6d7-ftvvx" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:16.023262       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-6f88fc9b74-tgtn9" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=3
I0825 04:12:16.713723       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9561/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-85jgl" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:17.357061       1 scheduler.go:604] "Successfully bound pod to node" pod="job-8275/foo-d4rf7" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:17.362662       1 scheduler.go:604] "Successfully bound pod to node" pod="job-8275/foo-p2zgn" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:18.052844       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-6f88fc9b74-2bmzh" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0825 04:12:18.054650       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-9424/local-injector" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:18.502989       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6692/pod-subpath-test-inlinevolume-pf56" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:19.347633       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6408/exec-volume-test-preprovisionedpv-g5kj" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:19.512148       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/up-down-1-jbh84" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:19.512628       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/up-down-1-c7ch4" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:19.512783       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/up-down-1-57mn5" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:19.582466       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-8838/test-recreate-deployment-786dd7c454-fcd6j" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:20.125808       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-6f88fc9b74-bkqvp" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:20.844281       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-5885/pod-11c116a7-b014-41d3-ac1c-488247a868eb" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:22.214438       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-8838/test-recreate-deployment-f79dd4667-zzmps" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:22.855482       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1610/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-x9z9v" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:22.969839       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/up-down-2-qwqz8" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:22.974491       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/up-down-2-rrhkm" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:22.995826       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/up-down-2-vvtbz" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:23.003107       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-2999/test-webserver-42b50ec2-a50e-407f-a1c6-6e6c8ca83f5a" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:23.835098       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8779/hostexec" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:24.741014       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-868948fd9c-ggjzh" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=3
I0825 04:12:26.725609       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-3042/pod-configmaps-303e46c8-16a6-437d-ad9d-7c48ef90b00b" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:30.274914       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-3386/downwardapi-volume-cacf6bcd-dc21-4a63-9d25-789b15653169" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:30.302682       1 scheduler.go:604] "Successfully bound pod to node" pod="clientset-3707/podcacbcf89-1eee-492c-b4b9-f414aad3cd52" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:31.718478       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-868948fd9c-kstxr" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0825 04:12:32.324373       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/verify-service-up-host-exec-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:32.505106       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-9424/local-client" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:32.987083       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6330/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-cnnkp" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:33.751200       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9561/pod-subpath-test-preprovisionedpv-z97z" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:34.386034       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1610/pod-subpath-test-preprovisionedpv-t8sx" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:34.637586       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8110/verify-service-up-exec-pod-dhhcg" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:34.901690       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-7259/exec-volume-test-dynamicpv-tlbh" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:35.585270       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5345/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:35.690370       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5345/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:35.713435       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7320/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-mq8x7" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:35.795834       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5345/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:35.900921       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5345/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:37.677468       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6551/test-rolling-update-with-lb-868948fd9c-spgqm" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:40.626628       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6330/pod-aa8c2085-cec0-456c-94b4-ce2631f8b453" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:41.119805       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-9808/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-gf84c" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:43.850603       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-4510-2252/csi-mockplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:46.209518       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-5745/downwardapi-volume-458397b5-0c9e-4053-ac6d-02feae06bce8" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:46.380057       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4206/pod-subpath-test-dynamicpv-b9bj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:46.534734       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6330/pod-f6276a98-1767-4e16-abe3-10f49adee5dc" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:47.573956       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-1325/busybox-readonly-false-f1eb0465-3ec8-4a22-a911-8d7a4a5be66e" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:49.332309       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7320/pod-subpath-test-preprovisionedpv-5pjp" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:50.724680       1 scheduler.go:604] "Successfully bound pod to node" pod="svc-latency-1153/svc-latency-rc-88jqb" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:50.737193       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-4408/pod-handle-http-request" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:50.881331       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-4510/pvc-volume-tester-xjlks" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:12:51.809074       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-6316/security-context-1cd8d92b-fd08-4232-b0a9-4dd4bce6a00f" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:54.020973       1 scheduler.go:604] "Successfully bound pod to node" pod="services-5951/affinity-clusterip-2td77" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:54.026672       1 scheduler.go:604] "Successfully bound pod to node" pod="services-5951/affinity-clusterip-m5tvj" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:54.037155       1 scheduler.go:604] "Successfully bound pod to node" pod="services-5951/affinity-clusterip-5pfb6" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:55.965532       1 scheduler.go:604] "Successfully bound pod to node" pod="replicaset-1551/condition-test-5sd2m" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:55.994211       1 scheduler.go:604] "Successfully bound pod to node" pod="replicaset-1551/condition-test-g6grk" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:57.064986       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-5345/test-container-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:57.158477       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-4408/pod-with-prestop-exec-hook" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:57.280949       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-8996/downwardapi-volume-6cf16be7-f27c-4f75-9245-1405ec2a458f" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:12:59.513949       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-9380/liveness-e6205ba6-8646-4ae0-9523-4c6f8bab0024" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:00.472765       1 scheduler.go:604] "Successfully bound pod to node" pod="services-5951/execpod-affinity7ms6m" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:00.889363       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6184/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-w5bpq" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:01.266344       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-6381/failure-3" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:02.089811       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-2500/downwardapi-volume-9cbe7d8c-3be7-4463-bbc2-f393d61ec775" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:02.441687       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-2902/pod-ready" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:04.772324       1 scheduler.go:604] "Successfully bound pod to node" pod="hostpath-8043/pod-host-path-test" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:05.073369       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-2729/dns-test-25300789-ef25-44e4-b434-3f0b45d6bcf2" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:05.896833       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-3408/forbid-1629864780-snn2l" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:05.962711       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-4797/replace-1629864780-ss7sg" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:06.043752       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-535/concurrent-1629864780-dj4dw" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:06.967449       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6444/pod-logs-websocket-e6e9547a-8ef4-4df3-83a7-7732237eca33" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:08.454667       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-316/pod-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:08.780801       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-6254/update-demo-nautilus-tljxl" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:08.780958       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-6254/update-demo-nautilus-hhgr9" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:09.648120       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3296/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-fvsm4" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:15.423492       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-4519-5054/csi-hostpath-attacher-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:15.781001       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-4519-5054/csi-hostpathplugin-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:15.985352       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-4519-5054/csi-hostpath-provisioner-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:16.199784       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-4519-5054/csi-hostpath-resizer-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:16.409271       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-4519-5054/csi-hostpath-snapshotter-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:16.551916       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7666/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-nq6rb" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:16.964044       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-9678/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-9xq7p" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:18.408898       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-2898/image-pull-test506395cc-ae7f-4ac5-b937-938a9450747b" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:19.299923       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6184/pod-subpath-test-preprovisionedpv-5tf6" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:19.385857       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3296/pod-subpath-test-preprovisionedpv-njdz" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:21.539408       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-9678/pod-2f3cf3fc-0c67-43f6-9249-451feb7038ad" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:23.326215       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-f5lst" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.330913       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-jxqqt" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.331753       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-rz5ld" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.342005       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-vq2xj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.352205       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-8w7q2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.357287       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-d8nfv" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.357596       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-cnwk8" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.367357       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-c6r7l" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.372131       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-9nxxl" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.375188       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-3321/simpletest.rc-n4pbl" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:23.983664       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-413/termination-message-containerbc00ecf4-4e3e-457f-a760-093076cf5286" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:24.049960       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-4375/hostpathsymlink-injector" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:24.937078       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-6254/update-demo-nautilus-q5lld" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:29.595962       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-9678/pod-62f3657a-2aa3-41d0-b7ad-5a19c7cc065c" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:29.933549       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8829/pod-subpath-test-inlinevolume-nc7p" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:32.652193       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-8040/pod-bddebad4-b1d1-483b-987a-cbfc1fe5b5ac" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:32.815092       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7666/pod-59c5eefb-b9a7-4c9e-9756-4c826ddb9916" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:34.532351       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-4375/hostpathsymlink-client" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:35.336979       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7666/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-sw68c" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:35.902588       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-2430/pod-handle-http-request" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:37.169464       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-8040/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-jp94w" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:38.297653       1 scheduler.go:604] "Successfully bound pod to node" pod="port-forwarding-6261/pfpod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:38.322745       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-2667-2631/csi-hostpath-attacher-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:38.331567       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-2430/pod-with-poststart-exec-hook" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:38.647684       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-2667-2631/csi-hostpathplugin-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:38.855879       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-2667-2631/csi-hostpath-provisioner-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:39.107298       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-2667-2631/csi-hostpath-resizer-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:39.309742       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-2667-2631/csi-hostpath-snapshotter-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:42.168417       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-2667/pod-5b27f2cd-b36f-4601-85e8-ba24c3a9d6af" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:42.454141       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-9966/metadata-volume-82388531-4f42-414d-b6ec-063514de30f0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:45.264084       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-306/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-67nnz" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:45.650531       1 scheduler.go:604] "Successfully bound pod to node" pod="subpath-2837/pod-subpath-test-downwardapi-479t" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:48.033099       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1257/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-d6gxk" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:49.441198       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-5394/pod-qos-class-72e6e105-5a9e-4342-81f3-531635782d70" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:49.661000       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4993/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-zdzqg" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:51.259213       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-6992/downwardapi-volume-7c84fade-2cd1-4bf3-be4b-e493765bccaa" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:52.487275       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5152/pod-subpath-test-inlinevolume-hxm8" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:52.542042       1 scheduler.go:604] "Successfully bound pod to node" pod="mount-propagation-4510/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-h5c5v" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:54.019807       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4993/pod-c8d5597a-4f09-4a33-b191-dee2f969ec12" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:54.433245       1 scheduler.go:604] "Successfully bound pod to node" pod="pvc-protection-2409/pvc-tester-xvr5d" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:54.842401       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2471/pod-subpath-test-inlinevolume-pfjm" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:13:57.852023       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-1910/busybox-readonly-fs3c5e76e4-9f74-4816-bdde-85a79cca196f" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:13:59.127895       1 scheduler.go:604] "Successfully bound pod to node" pod="replication-controller-4876/pod-adoption" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:01.762415       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-6301-1823/csi-mockplugin-0" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:01.967435       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-6301-1823/csi-mockplugin-attacher-0" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:02.472984       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-2182/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-ztb65" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:03.624805       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-306/pod-subpath-test-preprovisionedpv-2tnj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:03.999505       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-8508/pod-1a983f4b-7d0f-4aac-be20-dcbb874524f4" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:04.339871       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1257/pod-subpath-test-preprovisionedpv-nd9d" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:04.979850       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-5056-9522/csi-mockplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:05.596019       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3527-6264/csi-mockplugin-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:05.800173       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3527-6264/csi-mockplugin-attacher-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:06.523227       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-8508/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-snrh5" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:08.100711       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-6301/pvc-volume-tester-j2gdq" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:08.249495       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1799/pod-subpath-test-dynamicpv-zt4t" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:08.513203       1 scheduler.go:604] "Successfully bound pod to node" pod="var-expansion-4802/var-expansion-cf816ea9-0d85-4d43-b7fd-53a13217787e" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:08.581945       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-306/pod-subpath-test-preprovisionedpv-2tnj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:09.851430       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3582/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-55bhg" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:10.412947       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1141/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-kt4j5" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:11.259285       1 scheduler.go:604] "Successfully bound pod to node" pod="containers-9552/client-containers-260c125f-5699-445c-8e72-673b7944bd72" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:11.938706       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3527/pvc-volume-tester-pgbfl" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:12.126564       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-1892/dns-test-29137003-437f-45d9-9f65-e4874df96655" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:13.128562       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-5056/pvc-volume-tester-qlmgs" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:13.363479       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-1845/busybox-scheduling-d009f88d-792e-467a-9ac5-916fa77bc21e" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:15.022805       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3206/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-mn8hb" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:17.545070       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-7618/pod-configmaps-03a36e07-0210-4cac-84fc-fcda576af593" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:18.272596       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1141/pod-subpath-test-preprovisionedpv-mfx5" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:18.306636       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3582/local-injector" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:18.409336       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-6301/inline-volume-9pm2m" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:18.737313       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8456/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-rn45m" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:18.797277       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-2182/local-injector" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:20.022569       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3224/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:20.128584       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3224/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:20.235597       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3224/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:20.300389       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3206/pod-2b69536d-55cf-4b2a-89de-f2ceddc731e0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:20.342031       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3224/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:23.012422       1 scheduler.go:604] "Successfully bound pod to node" pod="init-container-5054/pod-init-6700c6a0-a10b-431d-b674-70a30a0812c1" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:24.025443       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-7254/pod-service-account-39823bdf-1803-4213-97f3-e27d6acde17f" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:25.919987       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-9416/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-dxw8g" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:26.403105       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-9609/nfs-server" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:30.300493       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-2636/downwardapi-volume-78a99134-6446-48fb-8386-54146d50408e" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:30.803719       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7018/ss-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:34.447151       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-8928/pod-877cbb00-66ed-44ff-99e0-60ffe69e9891" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:14:34.507206       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-9416/exec-volume-test-preprovisionedpv-zt5q" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:34.605625       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8456/exec-volume-test-preprovisionedpv-vspm" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:14:35.400920       1 scheduler.go:604] "Successfully bound pod 
to node\" pod=\"kubectl-6381/failure-4\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:37.409168       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"aggregator-4160/sample-apiserver-deployment-67dc674868-rt59l\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:39.805187       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-6373/pod-secrets-20bd4c22-472c-4a6c-af40-0a60e78833be\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:40.664873       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3582/local-client\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:40.974832       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-1584/agnhost-primary-9hvk6\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:41.428996       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-7018/ss-1\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:41.617176       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3224/test-container-pod\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:41.768085       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-1584/agnhost-primary-l64l2\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:42.632760       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5418/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-fvfhn\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:42.774663       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"volume-3689/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-5mbtp\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:43.122937       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2182/local-client\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:43.291618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"containers-8719/client-containers-60ac56b4-c36b-4af2-99b1-84f12a436571\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:43.706090       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2925/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-lxzj4\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:46.295723       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5418/pod-b050e24d-30c5-4fec-bb36-ffc0c09f4e8a\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:47.007901       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-931/security-context-28cc5919-ae53-48a2-83dd-183aa6bd12d9\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:47.394608       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7250/pod1\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:48.483704       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-9609/pvc-tester-qljr4\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:48.718092       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2925/pod-93e4f215-ce68-45c3-9c07-659abbf9c524\" err=\"0/5 nodes are available: 
1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0825 04:14:48.744863       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2925/pod-93e4f215-ce68-45c3-9c07-659abbf9c524\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0825 04:14:49.220143       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7915/exec-volume-test-dynamicpv-h8tc\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:51.565732       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2925/pod-93e4f215-ce68-45c3-9c07-659abbf9c524\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0825 04:14:53.011268       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7250/pod2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:53.871519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"prestop-4885/pod-prestop-hook-9d57a581-7b63-49c4-8c22-fd248993c921\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:53.972716       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-296/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-4b8dl\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nE0825 04:14:55.165149       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"pod-93e4f215-ce68-45c3-9c07-659abbf9c524.169e717c9c12e62c\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-2925\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-2925\", Name:\"pod-93e4f215-ce68-45c3-9c07-659abbf9c524\", UID:\"89fcb7df-6f7c-4408-badb-7b693ff572d1\", APIVersion:\"v1\", ResourceVersion:\"13084\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: persistent-local-volumes-test-2925/pod-93e4f215-ce68-45c3-9c07-659abbf9c524\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0418ecfc9b8302c, ext:579087850480, loc:(*time.Location)(0x2dc26e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0418ecfc9b8302c, ext:579087850480, loc:(*time.Location)(0x2dc26e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-93e4f215-ce68-45c3-9c07-659abbf9c524.169e717c9c12e62c\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2925 because it is being terminated' (will not retry!)\nI0825 04:14:55.235766       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9411/pod-00166c5e-3813-4b26-83bc-1cf2a4141b12\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:14:55.660762       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768-2048/csi-hostpath-attacher-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:56.002464       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768-2048/csi-hostpathplugin-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:56.203622       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768-2048/csi-hostpath-provisioner-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:56.416195       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768-2048/csi-hostpath-resizer-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:56.636156       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768-2048/csi-hostpath-snapshotter-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:57.343902       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-1472/adopt-release-gvm4f\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:57.353634       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-1472/adopt-release-hcts6\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:14:57.684009       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4891/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-477hg\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:14:59.330929       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pvc-protection-2340/pvc-tester-wsd6b\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:14:59.565037       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-9445/sample-webhook-deployment-6bd9446d55-x692w\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:00.233395       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-3570/busybox-637fc763-25ee-4acb-83cd-635490da4762\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:00.724564       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-6951/pod-aaa578b4-39b1-4c86-b825-8d55f4d82017\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:01.593439       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768/pod-ec140508-6bc8-4963-a2f3-e95cffad85d2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:02.737502       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-296/pod-subpath-test-preprovisionedpv-wkr2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:03.960723       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3689/exec-volume-test-preprovisionedpv-gbbr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:04.118110       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9768/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-spj9t\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:04.980334       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-2271/liveness-b13e4a92-bd72-4d38-ac82-9eb4c07dd3e5\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:05.075286       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"job-1472/adopt-release-jdlnp\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:06.155631       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-8165/concurrent-1629864900-nh8wd\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:07.674807       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1663/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-n8tm8\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:08.154367       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-2340/pvc-tester-l4d9k\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protection6rl7c\\\" is being deleted.\"\nI0825 04:15:08.160554       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-2340/pvc-tester-l4d9k\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protection6rl7c\\\" is being deleted.\"\nI0825 04:15:09.528354       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-9445/webhook-to-be-mutated\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:11.201333       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/service-headless-nk56f\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:11.228959       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/service-headless-hhdcf\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:11.229686       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/service-headless-xwtrl\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:12.630141       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"disruption-6027/pod-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:12.733752       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-6027/pod-1\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:12.844649       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-6027/pod-2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:13.372886       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-4750/labelsupdate0a90c1ac-01f4-4035-964e-3507608db5d5\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:14.643854       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/service-headless-toggled-6r69v\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:14.652567       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/service-headless-toggled-bfcdg\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:14.654193       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/service-headless-toggled-d7l5f\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:14.726186       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-331/ss-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:17.794038       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1663/pod-subpath-test-preprovisionedpv-wcgb\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:17.993411       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"services-2907/verify-service-up-host-exec-pod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:18.163166       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4891/pod-subpath-test-preprovisionedpv-dmkl\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:20.140976       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3013/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-wh88s\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:20.313847       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/verify-service-up-exec-pod-pnhtt\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:21.041626       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7432/frontend-7659f66489-dpxmv\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:21.061567       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7432/frontend-7659f66489-gqrxt\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:21.061924       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7432/frontend-7659f66489-mfjkg\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:21.695384       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7432/agnhost-primary-56857545d9-k4jnr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:22.361148       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7432/agnhost-replica-55fd9c5577-dmmgx\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:22.376591       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"kubectl-7432/agnhost-replica-55fd9c5577-v9zvk\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:22.734942       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-3831/pod-configmaps-ca517d05-65e1-4ce1-b9e1-1b4fa4c956e8\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:23.513717       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256-519/csi-hostpath-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:23.874997       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256-519/csi-hostpathplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:24.067688       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256-519/csi-hostpath-provisioner-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:24.111789       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-359/pod-projected-configmaps-13fcc42c-df90-452d-ad41-7ff4db2fba1a\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:24.280586       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256-519/csi-hostpath-resizer-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:24.491647       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256-519/csi-hostpath-snapshotter-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:25.714447       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/verify-service-down-host-exec-pod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:15:25.891664       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-2545/pod-be9f32be-ccfe-41ed-99f3-58cf989c5fcf\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:27.168622       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-4105/dns-test-2e9c0190-3dfb-488f-8c31-bc075bc25ddd\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:27.319939       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256/hostpath-injector\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:28.414993       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-2545/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-q4w9c\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:31.402979       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8523/pod-subpath-test-dynamicpv-qbs9\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:31.496208       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/verify-service-down-host-exec-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:32.537928       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7838/agnhost-primary-v9kmx\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:34.349598       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3013/local-injector\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:34.452039       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-331/ss-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:15:37.290356       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9348/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-txsrz\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:37.290603       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/verify-service-up-host-exec-pod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:37.952171       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-8237/downwardapi-volume-b84b6359-d731-4558-b4b8-98af34d3a4e1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:39.594696       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-2907/verify-service-up-exec-pod-28h6b\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:41.827005       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-5459/explicit-root-uid\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:41.854241       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-6345/pod-projected-configmaps-402f560a-0f01-4f67-a35b-a352a4c8c85a\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:45.480452       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-9906/pod-configmaps-c77699d0-30c0-4a4b-ad61-5a56e365c410\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:45.548209       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-406/pfpod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:49.362885       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"provisioning-9348/pod-subpath-test-preprovisionedpv-wqdx\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:49.762090       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-9429/nfs-server\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:51.072265       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"var-expansion-3681/var-expansion-575a65ef-cf41-4ac8-931b-99732893a623\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:52.439336       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-3406/my-hostname-basic-7457cfd2-6aac-49e6-9d54-d6e3931e0191-m5x95\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:52.644568       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3013/local-client\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:53.325320       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-9429/pvc-tester-m58px\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:54.044012       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-9035/pfpod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:55.598068       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5256/hostpath-client\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:15:57.971722       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-s2prj\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:15:59.064522       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"security-context-6469/security-context-a83ddf51-a9fc-4094-8b22-bcadb427f8a5\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:00.654200       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2883/pod-subpath-test-dynamicpv-7jpm\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:00.928633       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-331/ss-2\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:01.141746       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-718/pod-ephm-test-projected-6rsl\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:04.797731       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-9429/pvc-tester-wt4q5\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:04.915493       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-3585/downwardapi-volume-df3ab6be-3124-44e8-acac-d3a742a2fe1b\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:06.231837       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-8165/concurrent-1629864960-hxgqq\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:08.512450       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-7138/test-webserver-6c8c12f5-447b-4b48-8af9-878d07087337\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:16:09.412167       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"crd-webhook-8567/sample-crd-conversion-webhook-deployment-7d6697c5b7-sxkf2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
I0825 04:16:09.884835       1 scheduler.go:604] "Successfully bound pod to node" pod="default/recycler-for-nfs-s2prj" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:16:11.845386       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8012/agnhost-primary-t8hhz" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:16:12.404307       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-5276/downwardapi-volume-1a3ecc27-ab6d-40ba-8f49-caaa59181dbf" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:16:12.456335       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5123/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-g6c8j" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:16:14.976199       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-4615-3958/csi-mockplugin-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:16:16.080995       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5123/pod-7c0d7201-519b-4cc4-a110-f9cae380895a" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
... skipping repeated "Successfully bound pod to node" entries (04:16:16 to 04:19:06) ...
I0825 04:19:09.284416       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-2677/sample-webhook-deployment-6bd9446d55-qtm9f" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:19:10.219650       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8818/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-ksz7t" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:19:10.447814       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5812/pod-subpath-test-inlinevolume-xddz" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:19:10.727426       1 scheduler.go:604] "Successfully bound pod to node" pod="containers-9290/client-containers-f47e429a-e974-42ab-be3b-ed4014dbf36e" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:19:11.713080       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-5855/pod-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:19:11.816660       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-5855/pod-1" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:19:13.852313       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8818/pod-79c7c7be-d37a-419e-ac3b-447a93e65bf2" 
node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:14.384070       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"hostpath-2811/pod-host-path-test\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:15.142931       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-766/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-c2j4f\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:15.181362       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-478/startup-9325038a-6b05-4660-8360-48f2bbe9a8da\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:15.626342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-6785/security-context-0def1ca8-6ca4-4923-b4b7-d53f36615cc2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.008946       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-5f6z7\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.015072       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-5gb26\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.026261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-kwkmj\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.052868       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-tdww2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.053921       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-54xkz\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.063957       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-fks9h\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.069323       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-nm5qw\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.086616       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-c5llb\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.087273       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-5f7wd\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.087345       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-bqt7c\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.087398       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-trkdv\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.106729       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-jpm9w\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:19:17.126959       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-dsk7q\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.136825       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-8q868\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.137151       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-9kzbb\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.137231       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-vkv2x\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.142202       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-9tn75\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.142278       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-ltj4x\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.142352       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-mks2k\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.142423       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-cx6mx\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.154541       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-sgb58\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.154866       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-9zxv6\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.161834       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-fbcs4\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.162668       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-pxqvw\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.162904       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-7rd8v\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.163598       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-t5fsr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.194704       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-wzxx2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.242560       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-xkjl9\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.289727       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-qb7vr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.340312       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-kbpjn\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.391360       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-xhwhf\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.490159       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-zxjlb\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.541260       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-zfl54\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.592785       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-8p7pv\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.664865       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-zlmzr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.708067       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-ksbds\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.757859       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-mq5pz\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.803776       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-fks8c\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.844818       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-r2cnp\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:17.890936       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8710/cleanup40-44f9f254-ff54-4185-862c-6337e948ca5d-qlg89\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:18.298467       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5356/pod-subpath-test-preprovisionedpv-qw8f\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:18.606120       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3803/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:18.715068       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3803/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:18.817276       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3803/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:18.923687       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3803/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:19.925025       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"sysctl-2980/sysctl-fa584632-1c4f-4bcf-bb8b-7d3db8467538\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:21.508445       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6458/e2e-test-httpd-pod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:23.794373       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3365/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-67l7q\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:25.952544       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:30.049454       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4164-6690/csi-mockplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:30.071666       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-2610/pod-ephm-test-projected-hqw2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:30.154341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4164-6690/csi-mockplugin-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:30.259348       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4164-6690/csi-mockplugin-resizer-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:31.043170       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:31.670246       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"persistent-local-volumes-test-2739/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-886b8\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:32.818902       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:33.686937       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4206/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-w4wlg\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:33.778902       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-766/pod-subpath-test-preprovisionedpv-zk7m\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:36.299074       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4164/pvc-volume-tester-7lwwl\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:36.567743       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:37.424208       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7094/hostpath-injector\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:37.799676       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-4672/test-dns-nameservers\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:38.891597       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-1\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:40.075301       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"nettest-3803/test-container-pod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:40.501159       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8087/pod-test\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:41.141698       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7146/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-kvqxb\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:41.524830       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-8946/liveness-fa9c8732-f3e2-4615-9028-50ef24a85430\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:42.770902       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6687/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-6svwf\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:44.856586       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:47.856616       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4206/pod-subpath-test-preprovisionedpv-tkvg\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:48.956365       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3365/local-injector\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:49.958945       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-8827-6698/csi-hostpath-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:50.284641       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-8827-6698/csi-hostpathplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:50.488613       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-8827-6698/csi-hostpath-provisioner-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:50.515979       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-1455/pod-01f781c1-1435-4a25-91bd-3ac458180945\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:50.701786       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-8827-6698/csi-hostpath-resizer-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:50.924891       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-8827-6698/csi-hostpath-snapshotter-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:51.274441       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3867-5700/csi-mockplugin-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:51.377687       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3867-5700/csi-mockplugin-attacher-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:51.716955       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7094/hostpath-client\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:53.745641       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-8827/pod-74fb5616-db9b-4efa-85c2-8ae6d3ec9c5e\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0825 04:19:54.416916       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6687/pod-785b6905-c94b-4a58-a0b1-46938a257d16\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:56.613769       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"init-container-1230/pod-init-c40dc2ad-fa66-4e26-a650-fe72ec3695b1\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:57.481635       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3867/pvc-volume-tester-c9s9k\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:57.820967       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:19:58.394722       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6687/pod-2aab3f44-571d-47af-b9d5-6726840dfca9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:19:58.696330       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-7809/annotationupdate464ca6dc-9ca8-44f5-ac75-94e216776506\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:00.382234       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-2616/sample-webhook-deployment-6bd9446d55-hq69m\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:01.355692       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4164/pvc-volume-tester-t2qtp\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:02.004660       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"statefulset-8407/ss2-1\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:03.371951       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3365/local-client\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:03.446279       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7146/pod-subpath-test-preprovisionedpv-f24f\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:03.781177       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8407/ss2-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:04.000410       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6629/aws-injector\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:05.003307       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2873/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-8dkwl\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:05.158328       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4847/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-dnlz2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:06.564780       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-4577/failed-jobs-history-limit-1629865200-kxpx9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:07.070884       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-11/sample-webhook-deployment-6bd9446d55-dzv92\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:07.721009       1 scheduler.go:604] \"Successfully bound 
pod to node\" pod=\"provisioning-9106/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-bgd45\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:08.192533       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-7241-6804/csi-hostpath-attacher-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:08.510200       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-7241-6804/csi-hostpathplugin-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:08.716123       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-7241-6804/csi-hostpath-provisioner-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:08.929604       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-7241-6804/csi-hostpath-resizer-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:09.149618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-7241-6804/csi-hostpath-snapshotter-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:09.354976       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-7241/inline-volume-tester-v8vbf\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:20:10.921565       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-674/agnhost-primary-4wjfc\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:20:12.237410       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-6299/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient 
ephemeral-storage."
I0825 04:20:12.243932       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-no-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:12.448509       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:12.453990       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:12.874379       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6474/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-hqpnx" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:13.568274       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-2276/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-lmt27" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:14.631286       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-no-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:14.632164       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:15.082548       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:15.088007       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:17.631684       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:17.830851       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-6149/pod-projected-configmaps-f1ad449f-27f3-4ad9-8c33-a9245f2a2bb7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:18.631640       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6474/pod-subpath-test-preprovisionedpv-2csz" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:18.632321       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-no-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:19.086847       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2873/pod-subpath-test-preprovisionedpv-s77b" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:19.328087       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4847/pod-subpath-test-preprovisionedpv-rflk" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:19.632433       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pod-partial-resources" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:19.783598       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9106/pod-subpath-test-preprovisionedpv-6t85" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:19.964777       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9847/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-gkj2g" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:20.267262       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4477/test-cleanup-controller-4cxmk" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.532343       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod2" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:20.547412       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod2" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:20.667785       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-gl9hw" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.684484       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-cfpf2" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.695766       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-qxbf4" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.695846       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-gh4pz" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.695900       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-ggpr9" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.708408       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-s85jz" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.875244       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-vsmqf" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:20.978450       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-zt24p" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.011449       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-3833/test-pod-1" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.118072       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-3833/test-pod-2" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.188948       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-8h8d5" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.221878       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-3833/test-pod-3" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.231107       1 scheduler.go:604] "Successfully bound pod to node" pod="services-3349/kube-proxy-mode-detector" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.298697       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-t5rkj" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.411306       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-jbj9c" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:21.513265       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8124/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-6xwm7" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:21.633087       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:22.633178       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="limitrange-6299/pfpod2" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage."
I0825 04:20:23.230064       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6475/configmap-client" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:23.705137       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2351/externalsvc-wd9lp" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:23.708916       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2351/externalsvc-8pns7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.083883       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-hxdpx" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.308440       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-564cc96d6-tc658" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.321146       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-564cc96d6-dvg7p" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.389146       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-564cc96d6-x8cnp" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.508987       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-564cc96d6-skn4k" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.623436       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-c9ct8" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:25.740290       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-dd94f59b7-hfn9j" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
E0825 04:20:25.806888       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pfpod.169e71c997d7dfd6", GenerateName:"", Namespace:"limitrange-6299", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"limitrange-6299", Name:"pfpod", UID:"f87c0cc3-748e-4edc-a650-95cc98495ccc", APIVersion:"v1", ResourceVersion:"23982", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: limitrange-6299/pfpod", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0418f226ff4c5d6, ext:909729355115, loc:(*time.Location)(0x2dc26e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0418f226ff4c5d6, ext:909729355115, loc:(*time.Location)(0x2dc26e0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pfpod.169e71c997d7dfd6" is forbidden: unable to create new content in namespace limitrange-6299 because it is being terminated' (will not retry!)
E0825 04:20:25.810039       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pfpod2.169e71c99819c3f6", GenerateName:"", Namespace:"limitrange-6299", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"limitrange-6299", Name:"pfpod2", UID:"291bb3c7-efcd-48b3-8f21-98e26a28de48", APIVersion:"v1", ResourceVersion:"23984", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: limitrange-6299/pfpod2", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0418f227036a9f6, ext:909733673362, loc:(*time.Location)(0x2dc26e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0418f227036a9f6, ext:909733673362, loc:(*time.Location)(0x2dc26e0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pfpod2.169e71c99819c3f6" is forbidden: unable to create new content in namespace limitrange-6299 because it is being terminated' (will not retry!)
E0825 04:20:25.816221       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-no-resources.169e71c9986a4152", GenerateName:"", Namespace:"limitrange-6299", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"limitrange-6299", Name:"pod-no-resources", UID:"9c87cd96-4e43-4f0a-8521-41eeb1782f35", APIVersion:"v1", ResourceVersion:"23986", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: limitrange-6299/pod-no-resources", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0418f2270872752, ext:909738948348, loc:(*time.Location)(0x2dc26e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0418f2270872752, ext:909738948348, loc:(*time.Location)(0x2dc26e0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-no-resources.169e71c9986a4152" is forbidden: unable to create new content in namespace limitrange-6299 because it is being terminated' (will not retry!)
E0825 04:20:25.821324       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-partial-resources.169e71c998c6999f", GenerateName:"", Namespace:"limitrange-6299", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"limitrange-6299", Name:"pod-partial-resources", UID:"81025c1b-24a6-4d33-8bf8-3b91540a2329", APIVersion:"v1", ResourceVersion:"23988", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: limitrange-6299/pod-partial-resources", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0418f2270e37f9f, ext:909745000258, loc:(*time.Location)(0x2dc26e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0418f2270e37f9f, ext:909745000258, loc:(*time.Location)(0x2dc26e0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "pod-partial-resources.169e71c998c6999f" is forbidden: unable to create new content in namespace limitrange-6299 because it is being terminated' (will not retry!)
I0825 04:20:26.876095       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-564cc96d6-67hd7" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:26.922419       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4477/test-cleanup-deployment-685c4f8568-kdfrb" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:28.229176       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1476/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-n225p" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:28.751744       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-697/pod-2f76dec1-7579-4aa2-a460-ddca358fb85d" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:28.807423       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-564cc96d6-d2sww" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:28.919538       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-68759fdb54-gllff" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:28.935568       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-68759fdb54-pxtsj" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:28.941843       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-68759fdb54-xpl7v" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:29.447708       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6629/aws-client" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:30.339819       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-6304/e2e-configmap-dns-server-a2aca8f1-a4b9-43c3-b8ef-9fff351db6a6" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:31.377989       1 scheduler.go:604] "Successfully bound pod to node" pod="services-3349/affinity-clusterip-timeout-m2666" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:31.394883       1 scheduler.go:604] "Successfully bound pod to node" pod="services-3349/affinity-clusterip-timeout-fqpd9" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:31.403539       1 scheduler.go:604] "Successfully bound pod to node" pod="services-3349/affinity-clusterip-timeout-rqn98" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:32.741704       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9847/pod-subpath-test-preprovisionedpv-c59v" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:33.269775       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2351/execpodrc42p" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:33.359757       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-llflq" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:33.392517       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-vgbjk" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:33.398014       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-88wnr" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:33.601669       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8124/pod-subpath-test-preprovisionedpv-bkcb" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:33.987021       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1476/pod-subpath-test-preprovisionedpv-sk7h" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:34.921576       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-lbccx" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:35.787341       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-rjwp6" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:38.754545       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-6304/e2e-dns-utils" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:39.182872       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7405/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-vcd2l" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:40.228358       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-3214/sample-webhook-deployment-6bd9446d55-fvcjm" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:40.364655       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-f7chv" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:40.465851       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-4hsr7" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:40.574200       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-z7rxk" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:40.689223       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-5pbd4" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:40.831409       1 scheduler.go:604] "Successfully bound pod to node" pod="services-3349/execpod-affinitytcnqp" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:40.907922       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-whl4g" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:41.134837       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-v26n9" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:41.142618       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-dbmxc" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:41.262110       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-qx97f" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:41.633171       1 scheduler.go:604] "Successfully bound pod to node" pod="port-forwarding-6262/pfpod" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:41.930998       1 scheduler.go:604] "Successfully bound pod to node" pod="job-9878/exceed-active-deadline-2vvfk" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:41.933239       1 scheduler.go:604] "Successfully bound pod to node" pod="job-9878/exceed-active-deadline-wvwtd" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:42.411709       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-857f965b54-fx69x" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:44.019896       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1977/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:44.043766       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-2769/downwardapi-volume-8f0e78dc-9d32-4258-9639-6da9ff05c657" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:44.125212       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1977/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:44.231919       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1977/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:44.338437       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1977/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:44.709333       1 scheduler.go:604] "Successfully bound pod to node" pod="replicaset-3492/pod-adoption-release" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:45.010040       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/service-proxy-disabled-6mrwb" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:45.011342       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/service-proxy-disabled-k67ls" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:45.011648       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/service-proxy-disabled-qksrj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:46.085028       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7156/ss-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:49.145842       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7405/pod-subpath-test-preprovisionedpv-4zck" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:50.355737       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-rckp5" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:50.369776       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-7nkbl" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:50.403630       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-hqghd" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:51.480094       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/service-proxy-toggled-w4tgv" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:51.480407       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/service-proxy-toggled-h8qvm" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:51.480707       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/service-proxy-toggled-87rgs" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:51.546449       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6367/netserver-0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:51.566787       1 scheduler.go:604] "Successfully bound pod to node" pod="replicaset-3492/pod-adoption-release-8jljc" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:51.650098       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6367/netserver-1" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:51.753501       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6367/netserver-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:51.857286       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6367/netserver-3" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:51.880470       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-1875/startup-8e856271-8b4f-47b4-9817-513407de1eaa" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:51.971056       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-thhsq" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:53.139925       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8126/pod-subpath-test-inlinevolume-kr2g" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:53.168953       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-dnfcp" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:54.108965       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7405/pod-subpath-test-preprovisionedpv-4zck" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:54.826532       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/verify-service-up-host-exec-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:55.337873       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6969/pod-subpath-test-inlinevolume-9qff" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:55.933944       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4245/pod-subpath-test-dynamicpv-jlzf" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:56.144830       1 scheduler.go:604] "Successfully bound pod to node" pod="events-6920/send-events-182760d0-10b0-4840-b7e0-5ba10dbf39f6" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:20:56.423925       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-9095-4039/csi-mockplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:20:56.525401       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-9095-4039/csi-mockplugin-attacher-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:01.073371       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4962/test-rollover-controller-8qjb4" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:02.568659       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-9095/pvc-volume-tester-xpswb" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:02.760616       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-h7fpn" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:03.141864       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/verify-service-up-exec-pod-2swk9" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:03.388344       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1977/test-container-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:03.492603       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1977/host-test-container-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:05.617400       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4962/test-rollover-deployment-78bc8b888c-wj82p" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:06.464138       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4962/test-rollover-deployment-668db69979-2dbgt" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:07.202438       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7210/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-bsbrc" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:09.333551       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-5337/sample-webhook-deployment-6bd9446d55-kjd7s" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:09.598191       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4245/pod-subpath-test-dynamicpv-jlzf" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:09.882449       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-742/pod-configmaps-c0e0e957-00d4-4696-a034-b83d6723e633" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:10.578290       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/verify-service-down-host-exec-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:10.645846       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-58564f9c6b-dv8ns" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:10.680622       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-675bb8c874-q2ltc" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:10.688233       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-675bb8c874-p6v9v" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:10.726814       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-675bb8c874-rrkqb" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:12.073053       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-675bb8c874-jgpqt" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:12.297233       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-675bb8c874-kx68f" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:12.909780       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6367/test-container-pod" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:12.962190       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1562-2531/csi-hostpath-attacher-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:13.010657       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6367/host-test-container-pod" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:13.272060       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1562-2531/csi-hostpathplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:13.391615       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-9661/simpletest.deployment-59cfbf9b4d-tv722" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:13.402114       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-9661/simpletest.deployment-59cfbf9b4d-786xx" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:13.500792       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1562-2531/csi-hostpath-provisioner-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:13.699887       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1562-2531/csi-hostpath-resizer-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:13.925226       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1562-2531/csi-hostpath-snapshotter-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:16.372691       1 scheduler.go:604] "Successfully bound pod to node" pod="services-9512/verify-service-down-host-exec-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:16.887272       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1562/pod-subpath-test-dynamicpv-j7j2" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:17.123862       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-804/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-2jt2l" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:21:18.348896       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-3825/webserver-675bb8c874-zssk5" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:21:18.941929       1 scheduler.go:604] "Successfully bound pod to node" 
pod=\"provisioning-7534/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-rt7c7\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:19.063374       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-7156/ss-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:20.745892       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8144/metadata-volume-1668ce9c-a175-4d3f-ae79-c4ede5a0c576\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:20.829759       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-804/pod-511b9e39-db13-46dc-88e6-1e6f661cf7cf\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:22.153723       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:22.289117       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9512/verify-service-up-host-exec-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:22.631426       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-5403/downwardapi-volume-c967d30d-8552-4a3f-8e7c-5cfe6da4deca\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:24.567131       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9512/verify-service-up-exec-pod-95sm2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:25.533629       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:25.685998       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3082/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-wgmnd\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:26.458092       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4727/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-4vnmr\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:26.518170       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3167/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-jmv7v\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:27.381620       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-3118/pod-6374fdef-3fc6-4b04-8047-c22309a707a3\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:28.531543       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-2\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:28.999145       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"e2e-privileged-pod-7322/privileged-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:29.318909       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3082/pod-6d37ef9c-0030-4b70-ba29-7851df230731\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:30.103094       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9512/verify-service-down-host-exec-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.275947       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"provisioning-1562/pod-subpath-test-dynamicpv-j7j2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:30.397610       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-t2pgr\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.419523       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-r586g\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.419901       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-hjv7r\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.451779       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-grr5c\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.452487       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-pdwj5\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.466889       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-w2pnp\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.467910       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-jcx4w\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.472267       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-x9c5w\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.485019       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-cf7nb\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:30.488493       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-lv8dx\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:31.619971       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-657/pod-subpath-test-inlinevolume-hzfj\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:33.014276       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7534/pod-subpath-test-preprovisionedpv-sllb\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:33.895181       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1576-1838/csi-hostpath-attacher-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:34.216230       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1576-1838/csi-hostpathplugin-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:34.386163       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4727/pod-subpath-test-preprovisionedpv-7zzp\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:34.449406       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1576-1838/csi-hostpath-provisioner-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:34.470342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3167/exec-volume-test-preprovisionedpv-xfzp\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:34.639618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1576-1838/csi-hostpath-resizer-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 
04:21:34.861552       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1576-1838/csi-hostpath-snapshotter-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:35.043831       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2172/rs-jx7vw\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:35.379483       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3082/pod-c97146d4-8200-40e3-adb5-09198bb6d288\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:36.802324       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8792/httpd\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:37.439186       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-249/kube-proxy-mode-detector\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:37.696065       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1576/pod-subpath-test-dynamicpv-t82v\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:38.597565       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-935/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-m75p2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:39.318261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-7156/ss-2\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:39.721368       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-4322/nfs-server\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:40.047105       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-7297/implicit-root-uid\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:41.621197       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-249/echo-sourceip\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:43.401376       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-7166/my-hostname-basic-7b402c1d-5e8d-415b-96df-62793d1fdf82-c27tl\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:43.924839       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-1148/security-context-ba275adf-4c6b-4cc7-b917-e264255ea057\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:45.675974       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-306/pod-release-zt7tq\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:45.995708       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-306/pod-release-8bvlk\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:46.150298       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993-5153/csi-hostpath-attacher-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:46.474670       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993-5153/csi-hostpathplugin-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:46.684077       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993-5153/csi-hostpath-provisioner-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0825 04:21:46.904849       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993-5153/csi-hostpath-resizer-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:46.941058       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-9902/image-pull-test9b3f1d1c-69c6-4a86-9fcc-2dcbd0265213\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:47.118302       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993-5153/csi-hostpath-snapshotter-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:48.347549       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-4322/pvc-tester-v54dp\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:48.366671       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-249/pause-pod-6995b79788-ntgkq\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:48.371304       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-935/pod-subpath-test-preprovisionedpv-l6c9\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:48.386617       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-249/pause-pod-6995b79788-s52k7\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI0825 04:21:49.960904       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993/hostpath-injector\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:51.679204       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1223/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-gxvb8\" 
node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:51.739246       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2646/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-t27t2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:55.183468       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-4322/pvc-tester-hg5xk\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:57.808457       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-4322/pvc-tester-tpmzv\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:21:58.332965       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4622-5316/csi-hostpath-attacher-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:58.656332       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4622-5316/csi-hostpathplugin-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:58.884429       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4622-5316/csi-hostpath-provisioner-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:59.093002       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4622-5316/csi-hostpath-resizer-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:59.309111       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4622-5316/csi-hostpath-snapshotter-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:21:59.530776       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"topology-6704/pod-bca0581c-b85a-4806-bfe8-297c0ce6b97b\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:00.606169       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7515/pod-subpath-test-inlinevolume-9d5v\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:01.045157       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-2\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:02.136981       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4622/pod-subpath-test-dynamicpv-fqw6\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:03.461181       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1223/pod-subpath-test-preprovisionedpv-qm2g\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:03.800370       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2646/pod-subpath-test-preprovisionedpv-nfrt\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:04.006759       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1038/pod-subpath-test-inlinevolume-dh9l\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:05.962972       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-5942/downwardapi-volume-ce7994e8-bc99-4c5e-9425-dcd08d9d1ad7\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:06.825759       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-6707/pod-c906331a-860c-45f8-9a46-458673264f02\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0825 04:22:08.000551       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-2993/pod-projected-configmaps-ea9a6b17-36f0-43f5-ba05-e32bd53980b0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:09.596566       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-2756/busybox-33d3d0d2-4e70-4fd0-8b31-19f2996020a1\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:10.754707       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8781/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:10.860473       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8781/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:10.961692       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8781/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:11.074211       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8781/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:11.117068       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8634/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-7d9sc\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:11.621619       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9102/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-xrh6f\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:11.955127       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-1\" 
node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:12.379490       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1611/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:12.483264       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1611/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:12.491839       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-6961/pod-projected-configmaps-57df5924-07aa-4bc2-891a-f096caa3ccfd\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:12.588411       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1611/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:12.692165       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1611/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:13.389460       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3562/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-cq8xs\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:13.461237       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9835/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:13.571777       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9835/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:13.672438       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9835/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:13.777087       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9835/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:15.534440       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-6815/pod-secrets-e24361c7-8686-42da-a82a-eacc8ac07190\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:16.112193       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3690/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-mv8xl\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:16.277874       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-8550/test-webserver-3f8b57db-e0de-4af8-83cb-28056c0a68aa\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:18.414656       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-3562/pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:18.420518       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-3562/pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:19.437620       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9102/local-injector\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:19.486356       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:19.752261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3690/pod-75e14972-ff87-4b22-bce8-3a0436e351e8\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:20.524546       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-1751/pod-projected-secrets-0a10c942-b36d-4044-b404-95f9a4fd2261\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:20.661911       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-3562/pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:21.587638       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7923/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-7zq96\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:23.778470       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-1823/downward-api-9d70e78d-631e-4d91-a446-9dcc060208d5\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:24.640956       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-2421/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:24.646668       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-2421/test-pod\" err=\"0/5 nodes are 
available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:24.662521       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-3562/pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nE0825 04:22:24.857147       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae.169e71e54fc953a0\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-3562\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-3562\", Name:\"pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae\", UID:\"b566bdba-5912-4f44-b8ca-888d6d7b92ef\", APIVersion:\"v1\", ResourceVersion:\"29029\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: persistent-local-volumes-test-3562/pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0418f4032f253a0, ext:1028779526470, loc:(*time.Location)(0x2dc26e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0418f4032f253a0, ext:1028779526470, loc:(*time.Location)(0x2dc26e0)}}, Count:1, Type:\"Warning\", 
EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-6d3c634d-85ce-4674-8b47-2f6595e6a0ae.169e71e54fc953a0\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3562 because it is being terminated' (will not retry!)\nI0825 04:22:25.330313       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4615/simpletest.rc-qrzzn\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:25.345078       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4615/simpletest.rc-9gqf4\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:25.656127       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-6183/pod-projected-secrets-b6511d45-c739-4060-a122-2ca2133b5f19\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:26.662691       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-2421/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:30.258017       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7993/hostpath-client\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:31.180865       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-2421/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:31.185986       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" 
pod=\"resourcequota-2421/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:31.349809       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8514/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-mwn8x\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:31.473822       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-386/downward-api-c6096404-32e3-4f7d-a3fa-3db52355ff62\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:32.119916       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8781/test-container-pod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:32.212905       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8781/host-test-container-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:32.821599       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9835/test-container-pod\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:32.835481       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-12/dns-test-28eea048-7bf8-4102-9109-cde5650c95f7\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:33.662271       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7923/pod-09b9e5be-6c20-440e-8e72-acfaec758644\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:33.664565       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-2421/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had 
taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:22:33.910306       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1839/exec-volume-test-preprovisionedpv-xxcq\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:34.039325       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-6707/pod-f74e34f3-9063-4b59-9092-b87dc615983f\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:34.044254       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1611/test-container-pod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:34.215330       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8634/pod-subpath-test-preprovisionedpv-xxdv\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:35.474821       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-7368/pod-sharedvolume-5b75813e-c8e2-40d0-80e3-f5597d7b49b2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:36.183469       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7923/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-6nb4w\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:37.179971       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4748/busybox1\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:39.384300       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-8251/sample-webhook-deployment-6bd9446d55-8jcxr\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:39.892695       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-2\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:41.370234       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-2806/pod-3a4a4134-82df-4c82-ad9a-62a0e3047f11\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:42.833332       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-5004/pod-projected-configmaps-a184ad55-9ade-4aff-9dee-e8c7e3055ad2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:42.863529       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-8468/downwardapi-volume-2a303515-5580-4747-8b90-5931b8080d61\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:43.750728       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9102/local-client\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:45.423142       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-58/httpd\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:45.789010       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-551/pod-projected-secrets-02523e5f-520d-42da-85eb-4cb75b4c23c3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:46.358621       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-2059/busybox-privileged-false-e4333283-8a0c-4c1d-b53b-9eed41cf0b29\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:47.223571       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2136/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-gd67h\" 
node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:49.061678       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2773/pod-subpath-test-inlinevolume-xvxj\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:49.634744       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6717/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-4vqns\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:49.724763       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8514/pod-subpath-test-preprovisionedpv-b62f\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:51.368080       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-3968/busybox-readonly-true-249b3954-4cf4-4ce0-8315-c32570a0afec\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:51.653745       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4495/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-tgfhf\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:22:51.720401       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-3432/pod-configmaps-60730944-987c-45ed-984d-16981f6113d7\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:52.732597       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-7156/ss-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:54.372809       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:22:57.162566       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4305/hairpin\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:57.422908       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-8038/condition-test-8zzd8\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:57.434370       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-8038/condition-test-75rfk\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:57.523842       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-3353/httpd\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:57.858416       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7552/httpd\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:59.009646       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7809/externalname-service-lr48s\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:59.026524       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7809/externalname-service-nkcqf\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:22:59.592363       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1885/exec-volume-test-inlinevolume-6v4m\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:00.582681       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-7156/ss-1\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:00.732259       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"security-context-8515/security-context-63044b98-424d-407e-88b1-b5d37bca98d7\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:00.737557       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-9055/pod-d9a6f03c-e191-4c1d-9ff3-088c48fe32b7\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:02.373737       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8623/ss2-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:03.499879       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2136/pod-subpath-test-preprovisionedpv-thhh\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:03.624945       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4495/pod-subpath-test-preprovisionedpv-mhn5\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:03.651991       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-8122/image-pull-testf7761741-bfd1-462b-abd0-0f7c82614979\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:05.264977       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7809/execpodrsdtb\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:06.704877       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928-9345/csi-hostpath-attacher-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:06.745236       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-838/simple-1629865380-rg8gn\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:07.033446       
1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928-9345/csi-hostpathplugin-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:07.242283       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928-9345/csi-hostpath-provisioner-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:07.452166       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928-9345/csi-hostpath-resizer-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:07.674593       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928-9345/csi-hostpath-snapshotter-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:08.280549       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-33/pod-subpath-test-inlinevolume-jmgh\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:08.286429       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-1892/dns-test-3f86887e-1f80-4078-b47f-01fc4291b630\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:09.880233       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7190/slow-terminating-unready-pod-59dzb\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:10.348032       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-6278/termination-message-container13407cb3-ef7d-451d-b8bc-dc3180c1c75b\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:11.210593       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8275/downwardapi-volume-c6857a2a-9c84-44a8-b045-7b9a96e1a50e\" 
node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:11.865286       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-5003/backofflimit-tk74x\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:11.997072       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-7156/ss-2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:12.603538       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928/pod-16fdc065-d43f-4a96-afc9-92184942cd08\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:12.706385       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7190/execpod-7rzms\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:13.051385       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-5003/backofflimit-tq5w8\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:13.871524       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-334/pod-exec-websocket-0dc36249-955a-4b2e-8bd9-83f0c4805495\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:15.211074       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-7552/run-log-test\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:15.521114       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-798/nfs-server\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:16.863211       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pvc-protection-9486/pvc-tester-k5sn6\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:23:17.126005       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-928/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-qtwr5\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:17.518412       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8602/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-8fnjf\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:19.767566       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6026/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-74x4b\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:20.115537       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8019/affinity-nodeport-xxzxl\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:20.116616       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8019/affinity-nodeport-w5k62\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:20.127582       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8019/affinity-nodeport-j6h9w\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:22.260253       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6751/nodeport-test-th98p\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:22.270273       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6751/nodeport-test-nrw7s\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:23.672495       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8019/execpod-affinityrf9r9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:24.707560       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-5520/pod-9035fa1f-9a67-42d9-85db-227e019cb10f\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:25.009418       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-7696/nfs-server\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:25.510227       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6751/execpoddwlkd\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:29.097270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-798/pvc-tester-clzxj\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:30.314957       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-defaultsa\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:30.418797       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-mountsa\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:30.523467       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-nomountsa\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:30.629028       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-defaultsa-mountspec\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:30.731868       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-mountsa-mountspec\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0825 04:23:30.842131       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-nomountsa-mountspec\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:30.941021       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-defaultsa-nomountspec\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:31.046249       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-mountsa-nomountspec\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:31.150533       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8481/pod-service-account-nomountsa-nomountspec\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:31.198583       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2424/rs-7rl87\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:31.207568       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2424/rs-jsk4l\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:31.215299       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2424/rs-pfvjw\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:31.901122       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3599-3217/csi-hostpath-attacher-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.191882       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3599-3217/csi-hostpathplugin-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0825 04:23:32.405557       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3599-3217/csi-hostpath-provisioner-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.524134       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1138/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.619500       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3599-3217/csi-hostpath-resizer-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.633683       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1138/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.736694       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1138/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.778943       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-7696/pvc-tester-4x2wx\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:32.845873       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3599-3217/csi-hostpath-snapshotter-0\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:32.849238       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1138/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:33.041175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3599/inline-volume-tester-dpttw\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:33.287886       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"emptydir-wrapper-7333/pod-secrets-64092789-3e07-432c-9984-e8ca57f0d545\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:33.666991       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8602/pod-subpath-test-preprovisionedpv-b9vc\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:33.908569       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-9055/pod-8317a1a1-fef3-4ecb-9c71-dc09afb8f1f1\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:33.958156       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6026/pod-subpath-test-preprovisionedpv-x8ql\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:34.608942       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5238-2179/csi-mockplugin-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:34.715375       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5238-2179/csi-mockplugin-attacher-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:34.817469       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5238-2179/csi-mockplugin-resizer-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:35.817595       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-5735/pod-configmaps-1a639731-770e-48a1-a0a7-c56d43639c68\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:36.740371       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-695/metadata-volume-23d81f27-eb0c-4723-8c12-538d19962f9a\" 
node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:39.502373       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1494/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-l22d9\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:40.400491       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-4576/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-vvznb\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:40.789627       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-888/pod-subpath-test-inlinevolume-przc\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:42.247337       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2424/rs-bps86\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:45.411895       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2424/rs-n7mzs\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:45.461127       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5238/pvc-volume-tester-mq4g6\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:46.457647       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6891/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-kr97k\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:48.269037       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-4576/pod-57236f7e-b14a-4608-9bdf-35c11e8adc72\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:48.616483     
  1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-2509/pfpod\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:48.794470       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3838-461/csi-mockplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:49.800386       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3298/pod-subpath-test-inlinevolume-qk7w\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:50.788671       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-4576/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-qwwgs\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:50.919536       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-7242/rc-test-qztwv\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:23:51.911010       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1494/pod-66b6a447-7a47-44fe-8f2d-9667fc036abb\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:54.089225       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6891/pod-c98e2f12-3ced-44bb-a345-1e0aae24bd0c\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:23:54.205817       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-7658/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:23:54.212245       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" 
pod="resourcequota-7658/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity."
I0825 04:23:55.430065       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-3160/ss-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:23:55.489163       1 scheduler.go:604] "Successfully bound pod to node" pod="replication-controller-7242/rc-test-rtvfz" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:23:55.831175       1 volume_binding.go:260] Failed to bind volumes for pod "csi-mock-volumes-3838/pvc-volume-tester-427h8": binding volumes: provisioning failed for PVC "pvc-58b7b"
E0825 04:23:55.831228       1 framework.go:744] "Failed running PreBind plugin" err="binding volumes: provisioning failed for PVC \"pvc-58b7b\"" plugin="VolumeBinding" pod="csi-mock-volumes-3838/pvc-volume-tester-427h8"
E0825 04:23:55.831293       1 factory.go:338] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: provisioning failed for PVC \"pvc-58b7b\"" pod="csi-mock-volumes-3838/pvc-volume-tester-427h8"
I0825 04:23:55.887181       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1138/test-container-pod" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:23:56.681999       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="resourcequota-7658/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity."
I0825 04:23:56.844235       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3838/pvc-volume-tester-427h8" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:23:58.728261       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:23:58.731663       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:23:58.734640       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:00.817531       1 scheduler.go:604] "Successfully bound pod to node" pod="fsgroupchangepolicy-5520/pod-086f31da-aa78-4802-868f-7c2aef324b9e" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:01.375465       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2703/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-zv5cr" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:01.933102       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-511/pod-configmaps-b49aaa3f-a7e1-416e-9134-259c95f876ef" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:02.965267       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-1" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:04.074029       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4946/affinity-nodeport-transition-6qpb6" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:04.092545       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4946/affinity-nodeport-transition-4cr2c" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:04.093624       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4946/affinity-nodeport-transition-nfbmj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:04.354910       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-9598/sample-webhook-deployment-6bd9446d55-tz7kb" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:05.775414       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-1" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:06.765161       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-1" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:07.449220       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-560/pod-bbb5885c-4a3f-4938-8820-9d5a323910e7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:08.227202       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:10.324342       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8717/aws-injector" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:10.362932       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-4186/pod-b51da08a-bef4-45c0-b92d-543b4ba74a65" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:11.986327       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3403/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-xk67w" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:12.363820       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:12.827476       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:13.643969       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4946/execpod-affinitygzlxs" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:14.369448       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-3" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:14.532287       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4474/pod-subpath-test-inlinevolume-qvbd" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:15.222383       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5839/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-76fqc" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:16.368072       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-3" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:18.184630       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-1063/pod-submit-remove-c2848c1c-13ce-44b1-b060-e589c303aaaa" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:19.512558       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6359/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-8f5ft" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:19.564284       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-9061/pod-projected-secrets-09cae94d-5329-44a5-a397-74bc5b286845" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:19.571251       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-3" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:19.581392       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2703/pod-subpath-test-preprovisionedpv-wgdh" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:19.741195       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3403/local-injector" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:19.756553       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9255/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-smzm5" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:20.389370       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-4808/busybox-host-aliases7f0bedb4-02cb-4f02-8e94-670a6a6a07e7" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:21.563937       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-4" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:21.852884       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5050/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-bkl5j" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:24.764958       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-4" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:26.164119       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-4" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:26.626498       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2703/pod-subpath-test-preprovisionedpv-wgdh" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:26.767998       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-5" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:28.447499       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-1980/alpine-nnp-false-31b4fb77-d700-4dcd-8338-47228eb81933" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:28.696631       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-5" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:30.503154       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-6" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:31.102569       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-6" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:31.770631       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-2532/pod-781a46e5-f38a-467b-b521-981f2e5f7df0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:33.711300       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5839/exec-volume-test-preprovisionedpv-95cc" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:33.721614       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9255/pod-subpath-test-preprovisionedpv-42c5" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:33.729391       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-2002/test-pod-162cc2ef-77c4-47e3-87ab-4a0cb2d61b53" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:33.756213       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-7346/sample-webhook-deployment-6bd9446d55-xpc74" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:33.848402       1 scheduler.go:604] "Successfully bound pod to node" pod="containers-7557/client-containers-2f543dce-7679-4c89-bbb2-51a18473bb69" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:33.895787       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-5" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:34.513077       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6359/local-injector" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:34.679226       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5050/pod-subpath-test-preprovisionedpv-l5tn" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:36.295617       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-7" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:36.836523       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3837/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-ssppl" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:37.500121       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-7" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:37.622728       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7187/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-ngdt8" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:39.410332       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-946/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-fwc62" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:39.962729       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-8" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:40.696722       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-6" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:41.362591       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-8" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:42.059727       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3403/local-client" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:43.001211       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-5282/downwardapi-volume-143d0b36-df26-4f98-9075-f38c126e80c5" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:43.750743       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-6975/pod-configmaps-fd7b8de2-2276-4966-bfb4-ffed1db54877" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:44.765500       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-9" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:45.563702       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-9" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:45.972534       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-7" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:46.655834       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-1762/sample-webhook-deployment-6bd9446d55-vx9cl" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:47.314342       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3837/pod-23d55f7d-55c9-475b-bf51-b5785c54f04b" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:47.333043       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7187/pod-d29a3740-824e-40af-9e24-f9bd47949517" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:47.734408       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-2059/labelsupdate84ced3d0-f596-42e7-a3de-2768fb7b7d6d" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:48.278839       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7386/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-blwfk" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:48.619643       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8717/aws-client" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:48.753154       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-946/pod-d13bfaa4-e0e8-4004-9e41-75a9f1bf70a0" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:49.241908       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-7007/aws-injector" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:49.894955       1 scheduler.go:604] "Successfully bound pod to node" pod="subpath-2528/pod-subpath-test-configmap-ht4f" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:51.198247       1 scheduler.go:604] "Successfully bound pod to node" pod="secrets-8929/pod-secrets-d85914c8-548d-4edd-be97-31f2b66161ec" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:51.232697       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-8" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:51.471947       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-1701/pod-configmaps-81c6664a-59a1-46b1-ae26-91854a72ed95" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:51.633519       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-10" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:52.032480       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-10" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:54.501045       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-9" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:55.254242       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4811/hostexec-ip-172-20-38-132.eu-west-3.compute.internal-2cq7x" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:55.321206       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3837/pod-59deb73c-eff0-4e52-9f7b-d267a2222c26" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:55.393062       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7187/pod-50ec1508-ae37-4a2e-a091-34b32bdbf9a8" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:55.900574       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-11" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:56.042261       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6359/local-client" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:56.163872       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8770/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-4nv5k" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:56.198675       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-10" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:56.979729       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-8304/downward-api-9a168ab6-cdb7-422f-b8ba-541bfecb1c45" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:57.132837       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-11" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:58.097764       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-11" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:24:58.841713       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-946/pod-02a9ea80-a05d-4633-867d-435e88cf7b4c" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:24:59.684638       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7340/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-ss8d6" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:00.366252       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-12" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:01.072909       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7689-156/csi-hostpath-attacher-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:01.429838       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7689-156/csi-hostpathplugin-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:01.504423       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-9988/dns-test-8a63eb6d-eabf-45bb-adb9-c9f30ddcdded" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:01.618084       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7689-156/csi-hostpath-provisioner-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:01.765448       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-12" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:01.847867       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7689-156/csi-hostpath-resizer-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:02.052696       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7689-156/csi-hostpath-snapshotter-0" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:02.368880       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-12" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:02.457027       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="provisioning-7689/hostpath-injector" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims."
I0825 04:25:02.462620       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="provisioning-7689/hostpath-injector" err="0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims."
I0825 04:25:02.717579       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4811/pod-subpath-test-preprovisionedpv-cf29" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:04.035571       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8770/pod-subpath-test-preprovisionedpv-k8mw" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:04.239491       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7222/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-knp6b" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:04.509371       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7386/pod-subpath-test-preprovisionedpv-xd45" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:04.702281       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7689/hostpath-injector" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:06.364453       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-13" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:07.373112       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-13" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:08.661965       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7222/pod-02b09d7d-ef32-42d5-a581-fa798006ee04" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:08.861737       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7914/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-28f9g" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:08.939039       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7340/pod-a8ac0366-3bf7-4a46-b89b-285f76162f01" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:09.423054       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-3220/nfs-server" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:09.561905       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-7007/aws-client" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:11.363429       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-2-14" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.197092       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-8f84f" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.216414       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-c7dql" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.216642       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-tsgg7" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.216942       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-dksww" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.222408       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-7spqt" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.231256       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-mh4dc" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.232797       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-nb44s" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.245465       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-k2jd8" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.253178       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-g7jdf" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.257983       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-n2n8q" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:12.779197       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6295/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-x9dhv" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:14.041337       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-13" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:14.049458       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6182/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-cp8q9" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:14.564935       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-0-14" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:16.001200       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4472-374/csi-hostpath-attacher-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:16.321619       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4472-374/csi-hostpathplugin-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:16.568178       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4472-374/csi-hostpath-provisioner-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:16.908956       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4472-374/csi-hostpath-resizer-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:17.105999       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4472-374/csi-hostpath-snapshotter-0" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:17.345516       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-6282/pod-adce5c65-59ea-4746-a951-ea17a3843107" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.096522       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-3220/pvc-tester-7xbl7" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.589439       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7914/pod-2d8a092f-253e-464e-ada0-cd840f03df5d" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:19.634809       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-h8k8v" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.650576       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-pnhdk" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.655235       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-wmslv" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.666154       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-d9hq6" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.666236       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-vj468" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.666401       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-s27nm" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.666613       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-qk5gm" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.679301       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-7hnvw" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.693058       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-g89cd" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.693169       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-2044/rs-zwvqv" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.895631       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-9182/pod-submit-status-1-14" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:19.943803       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4472/pod-subpath-test-dynamicpv-mkj2" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:22.111929       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7914/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-46c97" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0825 04:25:24.897831       1 scheduler.go:604] "Successfully bound pod to node" pod="subpath-8401/pod-subpath-test-configmap-87p5" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:25.016970       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-hgr4b" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:25.038567       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-vvq7h" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:25.045366       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-8fcwp" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:25.115311       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-d8nfx" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:25.134779       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-kxzh9" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.161354       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-45qjx" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.169861       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-lwnkz" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.177415       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-98xhq" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.177940       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-26mtc" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.213052       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-5n97n" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.219338       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-p4d25" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.219493       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-gz7t4" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.225188       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-f27z7" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.225347       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-ncw29" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.226043       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-vkc4f" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.236993       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-6ppst" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.237190       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-28mxj" node="ip-172-20-36-72.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.248906       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-twpnb" node="ip-172-20-38-132.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.263486       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-sqdph" node="ip-172-20-32-67.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.266436       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-dd94f59b7-z4d96" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.271773       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-2h27p" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0825 04:25:26.273627       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-4066/webserver-deployment-795d758f88-26bsn" node="ip-172-20-37-233.eu-west-3.compute.internal" evaluatedNodes=5
feasibleNodes=4\nI0825 04:25:26.301422       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-4066/webserver-deployment-795d758f88-vr92r\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:26.301505       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-4066/webserver-deployment-795d758f88-26d4q\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:26.301563       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-4066/webserver-deployment-795d758f88-plps9\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:27.803609       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-7249/liveness-a3b5ff7a-5cdc-4e10-a909-690bf67f9b71\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:28.467812       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4538/httpd\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:31.087077       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4008-2969/csi-hostpath-attacher-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:31.403523       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4008-2969/csi-hostpathplugin-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:31.624450       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4008-2969/csi-hostpath-provisioner-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:31.844839       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4008-2969/csi-hostpath-resizer-0\" 
node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:32.073448       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4008-2969/csi-hostpath-snapshotter-0\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:32.398148       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2044/rs-8r25d\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:32.830103       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6182/pod-subpath-test-preprovisionedpv-qnl5\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:32.878656       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6295/pod-subpath-test-preprovisionedpv-d68t\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:33.317841       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-7912/downwardapi-volume-5a06c84b-9d0c-4b16-8cd1-a9748a2a4515\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:34.943449       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4008/pod-subpath-test-dynamicpv-626j\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:35.096273       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-1498/pod-8a1a7827-b21c-4610-873f-2beb9f0446ef\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:35.607738       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7689/hostpath-client\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:37.646741       1 factory.go:322] \"Unable to schedule pod; no fit; 
waiting\" pod=\"resourcequota-9322/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:25:37.660120       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9322/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:25:37.913561       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4174/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-cwt42\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:38.199436       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2478/hostexec-ip-172-20-37-233.eu-west-3.compute.internal-zskl8\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:38.511081       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-3483/pod-secrets-75c1d9e4-9a61-4ce8-8052-4ae2737cced7\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:39.007024       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-6430/pod-configmaps-0f020a73-db63-45fb-bada-85921e23fd38\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:39.707477       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9322/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:25:40.246767       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-5204/pfpod\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:25:41.523740       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-4151/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-hb7mb\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:41.992348       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6182/pod-subpath-test-preprovisionedpv-qnl5\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:42.749759       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-8469/pod-4d368ef4-a119-4497-8c52-475712ded0cd\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:43.445140       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-595/dns-test-737352c0-0ab1-46d7-b474-d98207b00346\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:44.180164       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9322/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:25:44.186122       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9322/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:25:46.708178       1 factory.go:322] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9322/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0825 04:25:47.946433       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-7445/dns-test-85472bf3-c3f9-45c8-8a22-4cb5fd49df4e\" 
node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:49.676815       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-4151/pod-0cdd0d82-08d3-4b94-8f2b-edfaa1f607e3\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:49.734304       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-2680/ss2-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:50.447626       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-2452/image-pull-testeff184ec-81e9-487d-a0e6-aebd0c9e54b4\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:51.740340       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-765/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-77d2r\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:52.186402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-4151/hostexec-ip-172-20-36-72.eu-west-3.compute.internal-qkwtf\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:52.222679       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8196/pod-projected-secrets-51b6b718-2562-486e-84e1-7b36eded989c\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:52.384986       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-2680/ss2-1\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:54.728161       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8185/pod-update-e46f88e4-4bff-479e-8d91-f69a324607e6\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:25:54.858506       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3664/hostexec-ip-172-20-32-67.eu-west-3.compute.internal-xtx2v\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:55.807691       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-2680/ss2-2\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:56.457525       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-5316/netserver-0\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:56.554455       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-5316/netserver-1\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:56.659072       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-5316/netserver-2\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:56.760938       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-5316/netserver-3\" node=\"ip-172-20-38-132.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:25:57.079138       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8005/annotationupdatec2980a69-2c29-4877-9834-826668c2abec\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:25:57.102371       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-765/pod-4e5967e6-b53e-4a75-954e-eb7875003744\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:01.160912       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"subpath-4743/pod-subpath-test-secret-qzc5\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 
04:26:03.146872       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-765/pod-430e3623-5bb1-44b2-a46c-b30048731daf\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:03.261035       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"hostpath-1268/pod-host-path-test\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:04.203702       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2478/pod-subpath-test-preprovisionedpv-ttgs\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:04.232355       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-3853/kube-proxy-mode-detector\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:04.955854       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-8469/pod-7246873e-5b03-4f39-bc84-e3d5d1f20ff6\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:06.539033       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-7679/pod-c2fab7fa-c3a9-4cd6-909c-37ebfbb92fa1\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:10.049410       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-2507-7881/csi-hostpath-attacher-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:10.153874       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-3853/affinity-nodeport-timeout-kg8lg\" node=\"ip-172-20-32-67.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:10.162948       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-3853/affinity-nodeport-timeout-nhwk7\" 
node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:10.169701       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-3853/affinity-nodeport-timeout-wdfbh\" node=\"ip-172-20-36-72.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0825 04:26:10.350446       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-2507-7881/csi-hostpathplugin-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:10.558232       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-2507-7881/csi-hostpath-provisioner-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:10.796953       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-2507-7881/csi-hostpath-resizer-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0825 04:26:10.981469       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-2507-7881/csi-hostpath-snapshotter-0\" node=\"ip-172-20-37-233.eu-west-3.compute.internal\" evaluatedNodes=5 feasibleNodes=1\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-44-96.eu-west-3.compute.internal ====\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"17925\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"38026\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"38038\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"38053\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": 
\"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"38060\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"38069\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"38079\"\n    },\n    \"items\": []\n}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:14.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9762" for this suite.


... skipping 13 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Aug 25 04:26:15.198: INFO: Waiting up to 5m0s for pod "metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93" in namespace "projected-9602" to be "Succeeded or Failed"
Aug 25 04:26:15.302: INFO: Pod "metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93": Phase="Pending", Reason="", readiness=false. Elapsed: 103.419027ms
Aug 25 04:26:17.406: INFO: Pod "metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207904552s
Aug 25 04:26:19.514: INFO: Pod "metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315401841s
Aug 25 04:26:21.619: INFO: Pod "metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.420377284s
STEP: Saw pod success
Aug 25 04:26:21.619: INFO: Pod "metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93" satisfied condition "Succeeded or Failed"
Aug 25 04:26:21.723: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93 container client-container: <nil>
STEP: delete the pod
Aug 25 04:26:21.935: INFO: Waiting for pod metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93 to disappear
Aug 25 04:26:22.039: INFO: Pod metadata-volume-6095e0c1-0668-4afa-b918-13ca3c2e0e93 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.678 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":22,"skipped":130,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 50 lines ...
Aug 25 04:25:17.345: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 25 04:25:17.464: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath729pz] to have phase Bound
Aug 25 04:25:17.573: INFO: PersistentVolumeClaim csi-hostpath729pz found but phase is Pending instead of Bound.
Aug 25 04:25:19.679: INFO: PersistentVolumeClaim csi-hostpath729pz found and phase=Bound (2.214293393s)
STEP: Creating pod pod-subpath-test-dynamicpv-mkj2
STEP: Creating a pod to test atomic-volume-subpath
Aug 25 04:25:19.990: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mkj2" in namespace "provisioning-4472" to be "Succeeded or Failed"
Aug 25 04:25:20.094: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Pending", Reason="", readiness=false. Elapsed: 103.512866ms
Aug 25 04:25:22.198: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20758284s
Aug 25 04:25:24.302: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311918339s
Aug 25 04:25:26.409: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418136745s
Aug 25 04:25:28.514: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522982142s
Aug 25 04:25:30.618: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.627173317s
... skipping 4 lines ...
Aug 25 04:25:41.153: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Running", Reason="", readiness=true. Elapsed: 21.162626031s
Aug 25 04:25:43.257: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Running", Reason="", readiness=true. Elapsed: 23.266653521s
Aug 25 04:25:45.361: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Running", Reason="", readiness=true. Elapsed: 25.370679755s
Aug 25 04:25:47.465: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Running", Reason="", readiness=true. Elapsed: 27.474542824s
Aug 25 04:25:49.569: INFO: Pod "pod-subpath-test-dynamicpv-mkj2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.578628827s
STEP: Saw pod success
Aug 25 04:25:49.569: INFO: Pod "pod-subpath-test-dynamicpv-mkj2" satisfied condition "Succeeded or Failed"
Aug 25 04:25:49.673: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-mkj2 container test-container-subpath-dynamicpv-mkj2: <nil>
STEP: delete the pod
Aug 25 04:25:49.896: INFO: Waiting for pod pod-subpath-test-dynamicpv-mkj2 to disappear
Aug 25 04:25:49.999: INFO: Pod pod-subpath-test-dynamicpv-mkj2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mkj2
Aug 25 04:25:50.000: INFO: Deleting pod "pod-subpath-test-dynamicpv-mkj2" in namespace "provisioning-4472"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":41,"skipped":342,"failed":0}
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:22.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:23.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-4533" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete","total":-1,"completed":42,"skipped":342,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:24.114: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 94 lines ...
Aug 25 04:26:24.215: INFO: AfterEach: Cleaning up test resources.
Aug 25 04:26:24.215: INFO: Deleting PersistentVolumeClaim "pvc-45kwh"
Aug 25 04:26:24.320: INFO: Deleting PersistentVolume "hostpath-crnqs"

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":36,"skipped":268,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:15.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 25 04:26:15.760: INFO: Waiting up to 5m0s for pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945" in namespace "emptydir-5990" to be "Succeeded or Failed"
Aug 25 04:26:15.864: INFO: Pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945": Phase="Pending", Reason="", readiness=false. Elapsed: 103.348267ms
Aug 25 04:26:17.967: INFO: Pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207053587s
Aug 25 04:26:20.073: INFO: Pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312601746s
Aug 25 04:26:22.177: INFO: Pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416583255s
Aug 25 04:26:24.286: INFO: Pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.525435751s
STEP: Saw pod success
Aug 25 04:26:24.286: INFO: Pod "pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945" satisfied condition "Succeeded or Failed"
Aug 25 04:26:24.390: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945 container test-container: <nil>
STEP: delete the pod
Aug 25 04:26:24.604: INFO: Waiting for pod pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945 to disappear
Aug 25 04:26:24.707: INFO: Pod pod-ea6c1e02-472b-4406-9b7b-620b4cc5a945 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 43 lines ...
• [SLOW TEST:11.479 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":16,"skipped":143,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-qzc5
STEP: Creating a pod to test atomic-volume-subpath
Aug 25 04:26:01.208: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qzc5" in namespace "subpath-4743" to be "Succeeded or Failed"
Aug 25 04:26:01.312: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Pending", Reason="", readiness=false. Elapsed: 103.77862ms
Aug 25 04:26:03.416: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 2.20754446s
Aug 25 04:26:05.523: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 4.314402882s
Aug 25 04:26:07.627: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 6.418478703s
Aug 25 04:26:09.731: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 8.523152515s
Aug 25 04:26:11.835: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 10.626929604s
Aug 25 04:26:13.943: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 12.734988722s
Aug 25 04:26:16.047: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 14.839052241s
Aug 25 04:26:18.151: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 16.942798336s
Aug 25 04:26:20.256: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 19.047390736s
Aug 25 04:26:22.360: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Running", Reason="", readiness=true. Elapsed: 21.151718761s
Aug 25 04:26:24.465: INFO: Pod "pod-subpath-test-secret-qzc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.256883094s
STEP: Saw pod success
Aug 25 04:26:24.465: INFO: Pod "pod-subpath-test-secret-qzc5" satisfied condition "Succeeded or Failed"
Aug 25 04:26:24.569: INFO: Trying to get logs from node ip-172-20-37-233.eu-west-3.compute.internal pod pod-subpath-test-secret-qzc5 container test-container-subpath-secret-qzc5: <nil>
STEP: delete the pod
Aug 25 04:26:24.792: INFO: Waiting for pod pod-subpath-test-secret-qzc5 to disappear
Aug 25 04:26:24.896: INFO: Pod pod-subpath-test-secret-qzc5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-qzc5
Aug 25 04:26:24.896: INFO: Deleting pod "pod-subpath-test-secret-qzc5" in namespace "subpath-4743"
... skipping 276 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:236
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":45,"skipped":230,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:27.695: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1440
------------------------------
... skipping 81 lines ...
• [SLOW TEST:25.167 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":21,"skipped":207,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 115 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":268,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:24.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:6.291 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:132
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":23,"skipped":132,"failed":0}
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:24.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:7.218 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":24,"skipped":132,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:31.664: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-0b569341-e3f9-40c2-842d-c9f19731120d
STEP: Creating a pod to test consume configMaps
Aug 25 04:26:27.287: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70" in namespace "projected-2367" to be "Succeeded or Failed"
Aug 25 04:26:27.389: INFO: Pod "pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70": Phase="Pending", Reason="", readiness=false. Elapsed: 102.102366ms
Aug 25 04:26:29.492: INFO: Pod "pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204992675s
Aug 25 04:26:31.595: INFO: Pod "pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308431278s
STEP: Saw pod success
Aug 25 04:26:31.596: INFO: Pod "pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70" satisfied condition "Succeeded or Failed"
Aug 25 04:26:31.698: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70 container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:26:31.927: INFO: Waiting for pod pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70 to disappear
Aug 25 04:26:32.036: INFO: Pod pod-projected-configmaps-98b07c44-3b33-4cf5-8735-475eed4f9b70 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.680 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":208,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:32.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-3411" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server","total":-1,"completed":36,"skipped":209,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:33.201: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 11 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":227,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:25.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:8.046 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":37,"skipped":227,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:33.274: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 11 lines ...
      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":38,"skipped":268,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:31.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
Aug 25 04:26:31.870: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-6055" to be "Succeeded or Failed"
Aug 25 04:26:31.980: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 109.907916ms
Aug 25 04:26:34.084: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213950955s
Aug 25 04:26:36.187: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317887023s
Aug 25 04:26:36.188: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:36.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6055" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":39,"skipped":268,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:36.527: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 89 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed","total":-1,"completed":46,"skipped":233,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:38.407: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:40.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3529" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":47,"skipped":238,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:40.570: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 68 lines ...
Aug 25 04:26:30.812: INFO: PersistentVolumeClaim pvc-92jfb found but phase is Pending instead of Bound.
Aug 25 04:26:32.917: INFO: PersistentVolumeClaim pvc-92jfb found and phase=Bound (14.837474391s)
Aug 25 04:26:32.917: INFO: Waiting up to 3m0s for PersistentVolume local-z5zxx to have phase Bound
Aug 25 04:26:33.021: INFO: PersistentVolume local-z5zxx found and phase=Bound (104.459115ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tmtc
STEP: Creating a pod to test subpath
Aug 25 04:26:33.343: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tmtc" in namespace "provisioning-3213" to be "Succeeded or Failed"
Aug 25 04:26:33.448: INFO: Pod "pod-subpath-test-preprovisionedpv-tmtc": Phase="Pending", Reason="", readiness=false. Elapsed: 104.638071ms
Aug 25 04:26:35.552: INFO: Pod "pod-subpath-test-preprovisionedpv-tmtc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209398828s
Aug 25 04:26:37.657: INFO: Pod "pod-subpath-test-preprovisionedpv-tmtc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313873995s
STEP: Saw pod success
Aug 25 04:26:37.657: INFO: Pod "pod-subpath-test-preprovisionedpv-tmtc" satisfied condition "Succeeded or Failed"
Aug 25 04:26:37.761: INFO: Trying to get logs from node ip-172-20-36-72.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-tmtc container test-container-volume-preprovisionedpv-tmtc: <nil>
STEP: delete the pod
Aug 25 04:26:37.978: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tmtc to disappear
Aug 25 04:26:38.089: INFO: Pod pod-subpath-test-preprovisionedpv-tmtc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tmtc
Aug 25 04:26:38.089: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tmtc" in namespace "provisioning-3213"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":31,"skipped":194,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:41.024: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:42.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7428" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":32,"skipped":199,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:42.972: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 67 lines ...
Aug 25 04:26:33.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 25 04:26:33.871: INFO: Waiting up to 5m0s for pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9" in namespace "emptydir-4810" to be "Succeeded or Failed"
Aug 25 04:26:33.973: INFO: Pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9": Phase="Pending", Reason="", readiness=false. Elapsed: 102.515082ms
Aug 25 04:26:36.077: INFO: Pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205734782s
Aug 25 04:26:38.180: INFO: Pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308567953s
Aug 25 04:26:40.283: INFO: Pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9": Phase="Running", Reason="", readiness=true. Elapsed: 6.41229411s
Aug 25 04:26:42.386: INFO: Pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.515219259s
STEP: Saw pod success
Aug 25 04:26:42.386: INFO: Pod "pod-23fe6922-95b5-4f07-a58b-2864e80b09e9" satisfied condition "Succeeded or Failed"
Aug 25 04:26:42.489: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-23fe6922-95b5-4f07-a58b-2864e80b09e9 container test-container: <nil>
STEP: delete the pod
Aug 25 04:26:42.744: INFO: Waiting for pod pod-23fe6922-95b5-4f07-a58b-2864e80b09e9 to disappear
Aug 25 04:26:42.853: INFO: Pod pod-23fe6922-95b5-4f07-a58b-2864e80b09e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.839 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":212,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:43.073: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 78 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 27 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-a5534291-46c2-4ce3-b373-9be1064a7f27
STEP: Creating a pod to test consume configMaps
Aug 25 04:26:41.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf" in namespace "configmap-1852" to be "Succeeded or Failed"
Aug 25 04:26:41.428: INFO: Pod "pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 102.751586ms
Aug 25 04:26:43.533: INFO: Pod "pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207091858s
Aug 25 04:26:45.636: INFO: Pod "pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309957514s
STEP: Saw pod success
Aug 25 04:26:45.636: INFO: Pod "pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf" satisfied condition "Succeeded or Failed"
Aug 25 04:26:45.739: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:26:45.950: INFO: Waiting for pod pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf to disappear
Aug 25 04:26:46.054: INFO: Pod pod-configmaps-dad3899f-c811-4b15-88e6-84424ccf3dbf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:5.661 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":244,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:46.274: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-8cb88de2-8d2e-4957-8962-e65a33b58c8f
STEP: Creating a pod to test consume configMaps
Aug 25 04:26:45.025: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c" in namespace "projected-3885" to be "Succeeded or Failed"
Aug 25 04:26:45.128: INFO: Pod "pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c": Phase="Pending", Reason="", readiness=false. Elapsed: 103.334691ms
Aug 25 04:26:47.232: INFO: Pod "pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20736648s
STEP: Saw pod success
Aug 25 04:26:47.232: INFO: Pod "pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c" satisfied condition "Succeeded or Failed"
Aug 25 04:26:47.336: INFO: Trying to get logs from node ip-172-20-38-132.eu-west-3.compute.internal pod pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c container agnhost-container: <nil>
STEP: delete the pod
Aug 25 04:26:47.554: INFO: Waiting for pod pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c to disappear
Aug 25 04:26:47.657: INFO: Pod pod-projected-configmaps-bc49ef7e-5eda-4524-9b57-085d1afa520c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:47.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3885" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":283,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:47.888: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 22 lines ...
STEP: Creating a kubernetes client
Aug 25 04:26:43.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Aug 25 04:26:43.569: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 04:26:48.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2489" for this suite.


• [SLOW TEST:5.875 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":33,"skipped":208,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 108 lines ...
• [SLOW TEST:6.665 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":49,"skipped":259,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:53.031: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 143 lines ...
• [SLOW TEST:23.494 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":38,"skipped":229,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:56.818: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 14 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:833
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":23,"skipped":224,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 04:26:51.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-b18b3b17-8160-451e-9bde-b393f0015209
STEP: Creating a pod to test consume secrets
Aug 25 04:26:52.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a" in namespace "projected-5135" to be "Succeeded or Failed"
Aug 25 04:26:52.543: INFO: Pod "pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 107.272966ms
Aug 25 04:26:54.647: INFO: Pod "pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210614924s
Aug 25 04:26:56.753: INFO: Pod "pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316890663s
STEP: Saw pod success
Aug 25 04:26:56.753: INFO: Pod "pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a" satisfied condition "Succeeded or Failed"
Aug 25 04:26:56.856: INFO: Trying to get logs from node ip-172-20-32-67.eu-west-3.compute.internal pod pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug 25 04:26:57.075: INFO: Waiting for pod pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a to disappear
Aug 25 04:26:57.178: INFO: Pod pod-projected-secrets-621d599a-2f4c-4b32-9a74-712acab93d2a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 5 lines ...
• [SLOW TEST:6.197 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":24,"skipped":224,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:26:57.502: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 84 lines ...
• [SLOW TEST:46.912 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:75
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":37,"skipped":254,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:27:01.526: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":41,"skipped":287,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:27:04.440: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":25,"skipped":152,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug 25 04:27:06.064: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":14,"skipped":133,"failed":0}
Aug 25 04:27:09.728: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7390
STEP: Creating statefulset with conflicting port in namespace statefulset-7390
STEP: Waiting until pod test-pod will start running in namespace statefulset-7390
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7390
Aug 25 04:26:54.342: INFO: Observed stateful pod in namespace: statefulset-7390, name: ss-0, uid: b4af968e-b1a3-433d-933f-e953e6aebb2a, status phase: Pending. Waiting for statefulset controller to delete.
Aug 25 04:26:55.331: INFO: Observed stateful pod in namespace: statefulset-7390, name: ss-0, uid: b4af968e-b1a3-433d-933f-e953e6aebb2a, status phase: Failed. Waiting for statefulset controller to delete.
Aug 25 04:26:55.335: INFO: Observed stateful pod in namespace: statefulset-7390, name: ss-0, uid: b4af968e-b1a3-433d-933f-e953e6aebb2a, status phase: Failed. Waiting for statefulset controller to delete.
Aug 25 04:26:55.339: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7390
STEP: Removing pod with conflicting port in namespace statefulset-7390
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7390 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Aug 25 04:27:01.866: INFO: Deleting all statefulset in ns statefulset-7390
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":34,"skipped":209,"failed":0}
Aug 25 04:27:13.029: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":30,"skipped":190,"failed":0}
Aug 25 04:27:14.468: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:10.189 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":42,"skipped":292,"failed":0}
Aug 25 04:27:14.657: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k