Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 56m12s
Revision | master
... skipping 209 lines ...
+ CHANNELS=/tmp/channels.fRBUoHE7P
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --down --kops-binary-path=/tmp/kops.jiBFoNAhQ
I0622 08:09:06.627950 6257 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 08:09:06.629658 6257 app.go:61] RunDir for this run: "/logs/artifacts/403903f7-f202-11ec-8dfe-daa417708791"
I0622 08:09:06.643129 6257 app.go:120] ID for this run: "403903f7-f202-11ec-8dfe-daa417708791"
I0622 08:09:06.658190 6257 dumplogs.go:45] /tmp/kops.jiBFoNAhQ toolbox dump --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0622 08:09:07.175151 6257 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0622 08:09:07.175197 6257 down.go:48] /tmp/kops.jiBFoNAhQ delete cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --yes
I0622 08:09:07.198388 6279 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0622 08:09:07.198508 6279 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true
I0622 08:09:07.198513 6279 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" not found
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ [[ v == \v ]]
+ KOPS_BASE_URL=
++ kops-download-release v1.23.2
++ local kops
+++ mktemp -t kops.XXXXXXXXX
++ kops=/tmp/kops.w7rkcJi4j
... skipping 7 lines ...
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --up --kops-binary-path=/tmp/kops.w7rkcJi4j --kubernetes-version=v1.23.1 --control-plane-size=1 --template-path=tests/e2e/templates/many-addons.yaml.tmpl '--create-args=--networking calico'
I0622 08:09:10.880891 6313 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0622 08:09:10.882398 6313 app.go:61] RunDir for this run: "/logs/artifacts/403903f7-f202-11ec-8dfe-daa417708791"
I0622 08:09:10.887065 6313 app.go:120] ID for this run: "403903f7-f202-11ec-8dfe-daa417708791"
I0622 08:09:10.887153 6313 up.go:44] Cleaning up any leaked resources from previous cluster
I0622 08:09:10.887190 6313 dumplogs.go:45] /tmp/kops.w7rkcJi4j toolbox dump --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0622 08:09:11.415485 6313 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0622 08:09:11.415569 6313 down.go:48] /tmp/kops.w7rkcJi4j delete cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --yes
I0622 08:09:11.440731 6335 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
I0622 08:09:11.440840 6335 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true
I0622 08:09:11.440845 6335 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" not found
I0622 08:09:11.903578 6313 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/22 08:09:11 failed to get external ip from metadata service:
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404 I0622 08:09:11.915844 6313 http.go:37] curl https://ip.jsb.workers.dev I0622 08:09:12.031936 6313 template.go:58] /tmp/kops.w7rkcJi4j toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template3554819528/manifest.yaml --values /tmp/kops-template3554819528/values.yaml --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io I0622 08:09:12.054127 6346 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0622 08:09:12.054239 6346 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true I0622 08:09:12.054244 6346 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0622 08:09:12.169890 6313 create.go:33] /tmp/kops.w7rkcJi4j create --filename /tmp/kops-template3554819528/manifest.yaml --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io ... skipping 66 lines ... NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:09:49.940249 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:09:59.981462 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0622 08:10:10.019324 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:10:20.065281 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:10:30.117231 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:10:40.164485 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0622 08:10:50.204837 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:11:00.239197 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:11:10.282320 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:11:20.317306 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0622 08:11:30.350792 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:11:40.388445 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:11:50.429097 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:12:00.465773 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0622 08:12:10.507644 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:12:20.559009 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:12:30.592751 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:12:40.644244 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0622 08:12:50.677483 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:13:00.726830 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:13:10.759895 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:13:20.795951 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0622 08:13:30.834666 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0622 08:13:40.874902 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 31 lines ... Pod kube-system/ebs-csi-node-t6zgh system-node-critical pod "ebs-csi-node-t6zgh" is pending Pod kube-system/ebs-csi-node-tclt4 system-node-critical pod "ebs-csi-node-tclt4" is pending Pod kube-system/metrics-server-655dc594b4-csgtp system-cluster-critical pod "metrics-server-655dc594b4-csgtp" is pending Pod kube-system/metrics-server-655dc594b4-fmssg system-cluster-critical pod "metrics-server-655dc594b4-fmssg" is pending Pod kube-system/node-local-dns-k6xx5 system-node-critical pod "node-local-dns-k6xx5" is pending Validation Failed W0622 08:13:52.236424 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 27 lines ... Pod kube-system/ebs-csi-node-lbh5j system-node-critical pod "ebs-csi-node-lbh5j" is pending Pod kube-system/ebs-csi-node-t6zgh system-node-critical pod "ebs-csi-node-t6zgh" is pending Pod kube-system/ebs-csi-node-tclt4 system-node-critical pod "ebs-csi-node-tclt4" is pending Pod kube-system/metrics-server-655dc594b4-csgtp system-cluster-critical pod "metrics-server-655dc594b4-csgtp" is pending Pod kube-system/metrics-server-655dc594b4-fmssg system-cluster-critical pod "metrics-server-655dc594b4-fmssg" is pending Validation Failed W0622 08:14:03.350661 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 21 lines ... Pod kube-system/ebs-csi-node-lbh5j system-node-critical pod "ebs-csi-node-lbh5j" is pending Pod kube-system/ebs-csi-node-t6zgh system-node-critical pod "ebs-csi-node-t6zgh" is pending Pod kube-system/ebs-csi-node-tclt4 system-node-critical pod "ebs-csi-node-tclt4" is pending Pod kube-system/metrics-server-655dc594b4-csgtp system-cluster-critical pod "metrics-server-655dc594b4-csgtp" is pending Pod kube-system/metrics-server-655dc594b4-fmssg system-cluster-critical pod "metrics-server-655dc594b4-fmssg" is not ready (metrics-server) Validation Failed W0622 08:14:14.395442 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... 
skipping 11 lines ... Pod kube-system/cert-manager-webhook-6d4d986bbd-klmzc system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-klmzc" is not ready (cert-manager) Pod kube-system/ebs-csi-controller-774fbb7f45-5lmzd system-cluster-critical pod "ebs-csi-controller-774fbb7f45-5lmzd" is pending Pod kube-system/ebs-csi-node-lbh5j system-node-critical pod "ebs-csi-node-lbh5j" is pending Pod kube-system/metrics-server-655dc594b4-csgtp system-cluster-critical pod "metrics-server-655dc594b4-csgtp" is not ready (metrics-server) Pod kube-system/metrics-server-655dc594b4-fmssg system-cluster-critical pod "metrics-server-655dc594b4-fmssg" is not ready (metrics-server) Validation Failed W0622 08:14:25.603682 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 8 lines ... VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-694f898955-lrvn8 system-cluster-critical pod "aws-load-balancer-controller-694f898955-lrvn8" is pending Pod kube-system/metrics-server-655dc594b4-csgtp system-cluster-critical pod "metrics-server-655dc594b4-csgtp" is not ready (metrics-server) Pod kube-system/metrics-server-655dc594b4-fmssg system-cluster-critical pod "metrics-server-655dc594b4-fmssg" is not ready (metrics-server) Validation Failed W0622 08:14:36.610347 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 7 lines ... VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-694f898955-lrvn8 system-cluster-critical pod "aws-load-balancer-controller-694f898955-lrvn8" is pending Pod kube-system/kube-proxy-ip-172-20-0-145.ec2.internal system-node-critical pod "kube-proxy-ip-172-20-0-145.ec2.internal" is pending Validation Failed W0622 08:14:47.640239 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 6 lines ... ip-172-20-0-74.ec2.internal node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-694f898955-lrvn8 system-cluster-critical pod "aws-load-balancer-controller-694f898955-lrvn8" is pending Validation Failed W0622 08:14:58.939273 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 6 lines ... ip-172-20-0-74.ec2.internal node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-694f898955-lrvn8 system-cluster-critical pod "aws-load-balancer-controller-694f898955-lrvn8" is pending Validation Failed W0622 08:15:09.946465 6388 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-east-1a Master c5.large 1 1 us-east-1a nodes-us-east-1a Node t3.medium 4 4 us-east-1a ... skipping 541 lines ... 
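The long run of "Validation Failed" retries above is kops waiting for the dns-controller Deployment to replace the placeholder API record (203.0.113.123) with the control plane's real address. While that wait is in progress, the same state can be checked by hand; the commands below are a rough sketch, are not part of this job, and assume cluster admin credentials have already been exported (for example with kops export kubecfg --admin):

    # Manual checks while dns-controller settles (sketch only; not run by the harness).
    CLUSTER=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io
    # Is api.<cluster> still the kops placeholder 203.0.113.123, or the real control-plane IP?
    dig +short "api.${CLUSTER}"
    # Re-run the same validation loop the harness uses, with an explicit timeout.
    kops validate cluster --name "${CLUSTER}" --wait 10m
    # Per the message above, the protokube and dns-controller logs hold the diagnostics.
    kubectl -n kube-system logs deployment/dns-controller --tail=100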
evicting pod kube-system/cert-manager-webhook-6d4d986bbd-klmzc I0622 08:18:37.608578 6509 request.go:665] Waited for 1.142576113s due to client-side throttling, not priority and fairness, request: GET:https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/kube-system/pods/cilium-operator-7fb7bf5c7-ggspl I0622 08:19:05.511990 6509 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. I0622 08:19:10.512456 6509 instancegroups.go:588] Stopping instance "i-09e81ab781ad88d7b", node "ip-172-20-0-180.ec2.internal", in group "master-us-east-1a.masters.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0622 08:19:10.708818 6509 instancegroups.go:434] waiting for 15s after terminating instance I0622 08:19:25.710429 6509 instancegroups.go:467] Validating the cluster. I0622 08:19:25.794774 6509 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.90.232.148:443: connect: connection refused. I0622 08:20:25.833989 6509 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.90.232.148:443: i/o timeout. I0622 08:21:25.893236 6509 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.90.232.148:443: i/o timeout. I0622 08:22:25.938062 6509 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.90.232.148:443: i/o timeout. I0622 08:23:25.982196 6509 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 3.90.232.148:443: i/o timeout. I0622 08:23:56.018969 6509 instancegroups.go:513] Cluster did not validate, will retry in "30s": unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host. I0622 08:24:27.543595 6509 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": node "ip-172-20-0-74.ec2.internal" of role "node" is not ready, node "ip-172-20-0-145.ec2.internal" of role "node" is not ready, system-cluster-critical pod "aws-load-balancer-controller-694f898955-j5stl" is pending, system-cluster-critical pod "aws-node-termination-handler-566d67f964-nwzcp" is pending, system-cluster-critical pod "cert-manager-699d66b4b-5c6s9" is pending, system-cluster-critical pod "cert-manager-cainjector-6465ccdb69-dr2fr" is pending, system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-6bcfp" is pending, system-node-critical pod "cilium-gsj22" is not ready (cilium-agent), system-cluster-critical pod "cluster-autoscaler-5f8fdb7d5c-n9tzg" is pending, system-cluster-critical pod "ebs-csi-controller-774fbb7f45-5pf5j" is pending, system-node-critical pod "ebs-csi-node-k98hw" is pending, system-cluster-critical pod "metrics-server-655dc594b4-fmssg" is not ready (metrics-server). 
I0622 08:24:58.663125 6509 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": node "ip-172-20-0-74.ec2.internal" of role "node" is not ready, node "ip-172-20-0-145.ec2.internal" of role "node" is not ready, system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-6bcfp" is not ready (cert-manager), system-cluster-critical pod "ebs-csi-controller-774fbb7f45-5pf5j" is pending. I0622 08:25:29.709708 6509 instancegroups.go:503] Cluster validated; revalidating in 10s to make sure it does not flap. I0622 08:25:40.897558 6509 instancegroups.go:500] Cluster validated. I0622 08:25:40.897631 6509 instancegroups.go:467] Validating the cluster. ... skipping 40 lines ... evicting pod kube-system/hubble-relay-55846f56fb-rmvh2 evicting pod kube-system/metrics-server-655dc594b4-fmssg evicting pod kube-system/coredns-autoscaler-57dd87df6c-flmqt evicting pod kube-system/coredns-7884856795-bzxnt WARNING: ignoring DaemonSet-managed Pods: kube-system/cilium-kzhs4, kube-system/ebs-csi-node-9bbx9, kube-system/node-local-dns-h4lgh evicting pod kube-system/coredns-7884856795-phnlf error when evicting pods/"coredns-7884856795-phnlf" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. WARNING: ignoring DaemonSet-managed Pods: kube-system/cilium-b979h, kube-system/ebs-csi-node-f4m7j, kube-system/node-local-dns-x8c8v evicting pod kube-system/metrics-server-655dc594b4-csgtp evicting pod kube-system/hubble-relay-55846f56fb-9q6sv I0622 08:37:05.464769 6509 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. evicting pod kube-system/coredns-7884856795-phnlf I0622 08:37:10.465554 6509 instancegroups.go:588] Stopping instance "i-0c414ebe1ef3e22a7", node "ip-172-20-0-145.ec2.internal", in group "nodes-us-east-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0622 08:37:10.634238 6509 instancegroups.go:434] waiting for 15s after terminating instance I0622 08:37:11.096295 6509 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0622 08:37:16.097359 6509 instancegroups.go:588] Stopping instance "i-080ec9ffe91c4490b", node "ip-172-20-0-16.ec2.internal", in group "nodes-us-east-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0622 08:37:16.258184 6509 instancegroups.go:434] waiting for 15s after terminating instance I0622 08:37:17.497875 6509 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0622 08:37:22.498476 6509 instancegroups.go:588] Stopping instance "i-067c7686bcae60b94", node "ip-172-20-0-206.ec2.internal", in group "nodes-us-east-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). 
I0622 08:37:22.631336 6509 instancegroups.go:434] waiting for 15s after terminating instance evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0622 08:37:25.635337 6509 instancegroups.go:467] Validating the cluster. I0622 08:37:26.717631 6509 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-84745" is pending, system-node-critical pod "cilium-krnmk" is pending, system-node-critical pod "ebs-csi-node-shxr7" is pending, system-node-critical pod "ebs-csi-node-xq8cp" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-145.ec2.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-16.ec2.internal" is not ready (kube-proxy), system-cluster-critical pod "metrics-server-655dc594b4-wctbl" is not ready (metrics-server), system-node-critical pod "node-local-dns-dtlc2" is pending, system-node-critical pod "node-local-dns-j5j69" is pending. evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. evicting pod kube-system/metrics-server-655dc594b4-csgtp error when evicting pods/"metrics-server-655dc594b4-csgtp" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. evicting pod kube-system/metrics-server-655dc594b4-csgtp I0622 08:37:48.857782 6509 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. I0622 08:37:53.858052 6509 instancegroups.go:588] Stopping instance "i-080b371d82f5f63ab", node "ip-172-20-0-74.ec2.internal", in group "nodes-us-east-1a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0622 08:37:53.991735 6509 instancegroups.go:434] waiting for 15s after terminating instance I0622 08:37:58.104951 6509 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-2z6lf" is pending, system-node-critical pod "cilium-bsx92" is pending, system-node-critical pod "cilium-f59n7" is pending, system-node-critical pod "cilium-krnmk" is pending, system-node-critical pod "ebs-csi-node-75gkr" is pending, system-node-critical pod "ebs-csi-node-s42dg" is pending, system-node-critical pod "ebs-csi-node-shxr7" is pending, system-node-critical pod "ebs-csi-node-xjp6t" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-145.ec2.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-16.ec2.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-206.ec2.internal" is not ready (kube-proxy), system-cluster-critical pod "metrics-server-655dc594b4-h7bxn" is not ready (metrics-server), system-node-critical pod "node-local-dns-cnplk" is pending, system-node-critical pod "node-local-dns-j5j69" is pending, system-node-critical pod "node-local-dns-k7cwt" is pending, system-node-critical pod "node-local-dns-x69sr" is pending. 
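The repeated "Cannot evict pod ... would violate the pod's disruption budget" messages in the drain output above are the API server rejecting evictions while a PodDisruptionBudget covering coredns or metrics-server has no allowed disruptions left; kops simply retries every 5s until a replacement replica is Ready. A hedged way to see which budget is blocking, with the PDB name and label below assumed rather than taken from this log:

    # Inspect the blocking disruption budgets (names/labels assumed; sketch only).
    kubectl -n kube-system get poddisruptionbudgets
    kubectl -n kube-system describe pdb metrics-server    # ALLOWED DISRUPTIONS explains the rejected evictions
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide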
I0622 08:38:29.457656 6509 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-2z6lf" is pending, system-node-critical pod "cilium-f59n7" is pending, system-node-critical pod "cilium-krnmk" is pending, system-node-critical pod "cilium-mnr7d" is pending, system-node-critical pod "ebs-csi-node-2mbxd" is pending, system-node-critical pod "ebs-csi-node-75gkr" is pending, system-node-critical pod "ebs-csi-node-shxr7" is pending, system-node-critical pod "ebs-csi-node-xjp6t" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-74.ec2.internal" is not ready (kube-proxy), system-node-critical pod "node-local-dns-j5j69" is pending, system-node-critical pod "node-local-dns-k7cwt" is pending, system-node-critical pod "node-local-dns-v6tb6" is pending, system-node-critical pod "node-local-dns-x69sr" is pending. ... skipping 270 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: blockfs] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 509 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: hostPath] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver hostPath doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 302 lines ... 
[AfterEach] [sig-api-machinery] client-go should negotiate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:40:54.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":1,"skipped":40,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:40:54.131: INFO: Only supported for providers [gce gke] (not aws) ... skipping 221 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:40:55.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "lease-test-39" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Server request timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:40:56.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "request-timeout-8800" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":1,"skipped":25,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:40:57.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "ingressclass-689" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:40:57.247: INFO: Driver aws doesn't support ext3 -- skipping ... skipping 102 lines ... 
[32m• [SLOW TEST:7.505 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m RecreateDeployment should delete old pods and create new ones [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:01.411: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 50 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 22 08:40:54.172: INFO: Waiting up to 5m0s for pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f" in namespace "projected-1855" to be "Succeeded or Failed" Jun 22 08:40:54.267: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f": Phase="Pending", Reason="", readiness=false. Elapsed: 94.943988ms Jun 22 08:40:56.299: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126987936s Jun 22 08:40:58.329: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15670221s Jun 22 08:41:00.360: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187321915s Jun 22 08:41:02.399: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226358511s Jun 22 08:41:04.429: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.256914254s [1mSTEP[0m: Saw pod success Jun 22 08:41:04.429: INFO: Pod "downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f" satisfied condition "Succeeded or Failed" Jun 22 08:41:04.458: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f container client-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:04.536: INFO: Waiting for pod downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f to disappear Jun 22 08:41:04.566: INFO: Pod downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:10.815 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:04.669: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping ... skipping 47 lines ... Jun 22 08:40:54.064: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-06918d8a-1cc5-46a6-8620-8f91c5866495 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 22 08:40:54.308: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f" in namespace "projected-220" to be "Succeeded or Failed" Jun 22 08:40:54.368: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 59.367164ms Jun 22 08:40:56.408: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09987373s Jun 22 08:40:58.439: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131093192s Jun 22 08:41:00.471: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163198123s Jun 22 08:41:02.505: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19667762s Jun 22 08:41:04.540: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.231765012s [1mSTEP[0m: Saw pod success Jun 22 08:41:04.540: INFO: Pod "pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f" satisfied condition "Succeeded or Failed" Jun 22 08:41:04.571: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:04.677: INFO: Waiting for pod pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f to disappear Jun 22 08:41:04.708: INFO: Pod pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... 
skipping 4 lines ... [32m• [SLOW TEST:10.945 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 76 lines ... [32m• [SLOW TEST:11.376 seconds][0m [sig-node] KubeletManagedEtcHosts [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... [32m• [SLOW TEST:11.381 seconds][0m [sig-api-machinery] Watchers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should observe an object deletion if it stops meeting the requirements of the selector [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:05.294: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 150 lines ... 
[32m• [SLOW TEST:9.390 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m test Deployment ReplicaSet orphaning and adoption regarding controllerRef [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":2,"skipped":27,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 22 08:40:54.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1" in namespace "downward-api-8986" to be "Succeeded or Failed" Jun 22 08:40:54.368: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 59.344948ms Jun 22 08:40:56.405: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096288899s Jun 22 08:40:58.438: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129240591s Jun 22 08:41:00.469: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160099985s Jun 22 08:41:02.505: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196611185s Jun 22 08:41:04.537: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.228516504s Jun 22 08:41:06.634: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.325154336s [1mSTEP[0m: Saw pod success Jun 22 08:41:06.635: INFO: Pod "downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1" satisfied condition "Succeeded or Failed" Jun 22 08:41:06.692: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:07.196: INFO: Waiting for pod downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1 to disappear Jun 22 08:41:07.228: INFO: Pod downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:13.433 seconds][0m [sig-storage] Downward API volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide container's memory limit [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:07.328: INFO: Only supported for providers [azure] (not aws) ... skipping 44 lines ... [32m• [SLOW TEST:18.209 seconds][0m [sig-apps] ReplicaSet [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Replace and Patch tests [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:12.057: INFO: Only supported for providers [vsphere] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 76 lines ... [32m• [SLOW TEST:18.533 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to change the type from NodePort to ExternalName [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 80 lines ... 
[32m• [SLOW TEST:19.518 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should support configurable pod resolv.conf [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":5,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:13.409: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 22 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:54 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 22 08:41:05.010: INFO: Waiting up to 5m0s for pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c" in namespace "emptydir-3890" to be "Succeeded or Failed" Jun 22 08:41:05.042: INFO: Pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.306217ms Jun 22 08:41:07.074: INFO: Pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063138129s Jun 22 08:41:09.128: INFO: Pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11723155s Jun 22 08:41:11.159: INFO: Pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148866835s Jun 22 08:41:13.191: INFO: Pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180382742s [1mSTEP[0m: Saw pod success Jun 22 08:41:13.191: INFO: Pod "pod-68b969b1-0954-4049-b5dd-42b648b0ed6c" satisfied condition "Succeeded or Failed" Jun 22 08:41:13.241: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-68b969b1-0954-4049-b5dd-42b648b0ed6c container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:13.322: INFO: Waiting for pod pod-68b969b1-0954-4049-b5dd-42b648b0ed6c to disappear Jun 22 08:41:13.353: INFO: Pod pod-68b969b1-0954-4049-b5dd-42b648b0ed6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 8 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47[0m new files should be created with FSGroup ownership when container is root [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:54[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":2,"skipped":1,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:13.421: INFO: Only supported for providers [gce gke] (not aws) ... skipping 276 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 22 08:41:04.862: INFO: Waiting up to 5m0s for pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819" in namespace "projected-6827" to be "Succeeded or Failed" Jun 22 08:41:04.892: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819": Phase="Pending", Reason="", readiness=false. Elapsed: 29.733839ms Jun 22 08:41:06.931: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068411033s Jun 22 08:41:08.962: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099779609s Jun 22 08:41:10.992: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129573104s Jun 22 08:41:13.023: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819": Phase="Running", Reason="", readiness=true. Elapsed: 8.160849637s Jun 22 08:41:15.057: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.194704198s [1mSTEP[0m: Saw pod success Jun 22 08:41:15.057: INFO: Pod "metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819" satisfied condition "Succeeded or Failed" Jun 22 08:41:15.087: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:15.159: INFO: Waiting for pod metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819 to disappear Jun 22 08:41:15.188: INFO: Pod metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:10.575 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":6,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:15.253: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 149 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and read from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":33,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:19.664: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 70 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:19.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-5711" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]","total":-1,"completed":2,"skipped":47,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:19.956: INFO: Driver hostPath doesn't support ext4 -- skipping ... skipping 136 lines ... 
[32m• [SLOW TEST:8.986 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should validate Deployment Status endpoints [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:21.052: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 67 lines ... [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap configmap-9137/configmap-test-54fccb4f-b85c-4863-8f25-16d5c971de03 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 22 08:41:15.488: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4" in namespace "configmap-9137" to be "Succeeded or Failed" Jun 22 08:41:15.521: INFO: Pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.895031ms Jun 22 08:41:17.551: INFO: Pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062771393s Jun 22 08:41:19.581: INFO: Pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092576047s Jun 22 08:41:21.611: INFO: Pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123253841s Jun 22 08:41:23.642: INFO: Pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154212459s [1mSTEP[0m: Saw pod success Jun 22 08:41:23.642: INFO: Pod "pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4" satisfied condition "Succeeded or Failed" Jun 22 08:41:23.672: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4 container env-test: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:23.737: INFO: Waiting for pod pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4 to disappear Jun 22 08:41:23.767: INFO: Pod pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:8.555 seconds][0m [sig-node] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be consumable via environment variable [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 38 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452[0m that expects NO client request [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462[0m should support a client that connects, sends DATA, and disconnects [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":39,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:41:13.459: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 22 08:41:13.671: INFO: Waiting up to 5m0s for pod "security-context-359a1dda-a7cb-4873-a598-39266d718756" in namespace "security-context-9719" to be "Succeeded or Failed" Jun 22 08:41:13.706: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Pending", Reason="", readiness=false. Elapsed: 35.475191ms Jun 22 08:41:15.738: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067473965s Jun 22 08:41:17.771: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100626173s Jun 22 08:41:19.803: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132115111s Jun 22 08:41:21.835: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164642253s Jun 22 08:41:23.867: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.196422048s Jun 22 08:41:25.899: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.228072526s [1mSTEP[0m: Saw pod success Jun 22 08:41:25.899: INFO: Pod "security-context-359a1dda-a7cb-4873-a598-39266d718756" satisfied condition "Succeeded or Failed" Jun 22 08:41:25.931: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod security-context-359a1dda-a7cb-4873-a598-39266d718756 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:26.477: INFO: Waiting for pod security-context-359a1dda-a7cb-4873-a598-39266d718756 to disappear Jun 22 08:41:26.509: INFO: Pod security-context-359a1dda-a7cb-4873-a598-39266d718756 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:13.116 seconds][0m [sig-node] Security Context [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support seccomp unconfined on the pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":3,"skipped":23,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:26.577: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 133 lines ... Jun 22 08:41:12.902: INFO: PersistentVolumeClaim pvc-nrrk5 found but phase is Pending instead of Bound. Jun 22 08:41:14.933: INFO: PersistentVolumeClaim pvc-nrrk5 found and phase=Bound (10.191106432s) Jun 22 08:41:14.933: INFO: Waiting up to 3m0s for PersistentVolume local-h6ppz to have phase Bound Jun 22 08:41:14.964: INFO: PersistentVolume local-h6ppz found and phase=Bound (30.377168ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-8w74 [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 22 08:41:15.054: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8w74" in namespace "volume-2576" to be "Succeeded or Failed" Jun 22 08:41:15.084: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Pending", Reason="", readiness=false. Elapsed: 30.564023ms Jun 22 08:41:17.115: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060719974s Jun 22 08:41:19.146: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092140981s Jun 22 08:41:21.176: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122662461s Jun 22 08:41:23.207: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153212398s Jun 22 08:41:25.238: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.184120558s Jun 22 08:41:27.268: INFO: Pod "exec-volume-test-preprovisionedpv-8w74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.214137721s [1mSTEP[0m: Saw pod success Jun 22 08:41:27.268: INFO: Pod "exec-volume-test-preprovisionedpv-8w74" satisfied condition "Succeeded or Failed" Jun 22 08:41:27.297: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod exec-volume-test-preprovisionedpv-8w74 container exec-container-preprovisionedpv-8w74: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:27.366: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8w74 to disappear Jun 22 08:41:27.397: INFO: Pod exec-volume-test-preprovisionedpv-8w74 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-8w74 Jun 22 08:41:27.397: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8w74" in namespace "volume-2576" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":1,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:27.890: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 52 lines ... Jun 22 08:41:11.058: INFO: PersistentVolumeClaim pvc-h9tdh found but phase is Pending instead of Bound. Jun 22 08:41:13.091: INFO: PersistentVolumeClaim pvc-h9tdh found and phase=Bound (8.179753737s) Jun 22 08:41:13.091: INFO: Waiting up to 3m0s for PersistentVolume local-jpxx7 to have phase Bound Jun 22 08:41:13.119: INFO: PersistentVolume local-jpxx7 found and phase=Bound (28.519367ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-j64s [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:41:13.208: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j64s" in namespace "provisioning-6464" to be "Succeeded or Failed" Jun 22 08:41:13.241: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. Elapsed: 33.119141ms Jun 22 08:41:15.271: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063143261s Jun 22 08:41:17.302: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094707343s Jun 22 08:41:19.332: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124561788s Jun 22 08:41:21.362: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154251588s Jun 22 08:41:23.392: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.184168959s Jun 22 08:41:25.423: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.215332236s Jun 22 08:41:27.452: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.244328441s [1mSTEP[0m: Saw pod success Jun 22 08:41:27.452: INFO: Pod "pod-subpath-test-preprovisionedpv-j64s" satisfied condition "Succeeded or Failed" Jun 22 08:41:27.483: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-preprovisionedpv-j64s container test-container-subpath-preprovisionedpv-j64s: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:27.555: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j64s to disappear Jun 22 08:41:27.583: INFO: Pod pod-subpath-test-preprovisionedpv-j64s no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-j64s Jun 22 08:41:27.583: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j64s" in namespace "provisioning-6464" ... skipping 34 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":11,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:41:21.082: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 22 08:41:21.259: INFO: Waiting up to 5m0s for pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478" in namespace "downward-api-1520" to be "Succeeded or Failed" Jun 22 08:41:21.291: INFO: Pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478": Phase="Pending", Reason="", readiness=false. Elapsed: 31.867829ms Jun 22 08:41:23.320: INFO: Pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060902797s Jun 22 08:41:25.350: INFO: Pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090956742s Jun 22 08:41:27.380: INFO: Pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121129815s Jun 22 08:41:29.409: INFO: Pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.15025257s [1mSTEP[0m: Saw pod success Jun 22 08:41:29.409: INFO: Pod "downward-api-086aa857-7484-4c67-9b93-e8d7c6431478" satisfied condition "Succeeded or Failed" Jun 22 08:41:29.438: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod downward-api-086aa857-7484-4c67-9b93-e8d7c6431478 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:29.505: INFO: Waiting for pod downward-api-086aa857-7484-4c67-9b93-e8d7c6431478 to disappear Jun 22 08:41:29.534: INFO: Pod downward-api-086aa857-7484-4c67-9b93-e8d7c6431478 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:8.512 seconds][0m [sig-node] Downward API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:29.602: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 127 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":12,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:30.381: INFO: Only supported for providers [gce gke] (not aws) ... skipping 207 lines ... 
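The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from the e2e framework polling the pod's phase roughly every 2 seconds until it reaches a terminal state, then fetching the container logs and deleting the pod. The following is only a minimal client-go sketch of that polling pattern as it appears in the log, not the framework's own helper; the function name, namespace, and pod name are placeholders.

// Minimal sketch (not the e2e framework's code): poll a pod's phase about
// every 2s, up to 5m, until it is "Succeeded" or "Failed", mirroring the
// "Waiting up to 5m0s for pod ..." lines above. Namespace and pod name are
// placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodCompletion(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Same shape as the log lines: phase plus elapsed time since the wait began.
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil // "Saw pod success"
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending/Running; keep polling
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Placeholder namespace and pod name for illustration only.
	if err := waitForPodCompletion(cs, "example-ns", "example-pod"); err != nil {
		panic(err)
	}
}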
Jun 22 08:41:26.598: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0777 on node default medium Jun 22 08:41:26.788: INFO: Waiting up to 5m0s for pod "pod-cefdf01c-c8ed-413c-b313-72f455e6079e" in namespace "emptydir-3619" to be "Succeeded or Failed" Jun 22 08:41:26.821: INFO: Pod "pod-cefdf01c-c8ed-413c-b313-72f455e6079e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.998639ms Jun 22 08:41:28.854: INFO: Pod "pod-cefdf01c-c8ed-413c-b313-72f455e6079e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065546909s Jun 22 08:41:30.886: INFO: Pod "pod-cefdf01c-c8ed-413c-b313-72f455e6079e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097465283s [1mSTEP[0m: Saw pod success Jun 22 08:41:30.886: INFO: Pod "pod-cefdf01c-c8ed-413c-b313-72f455e6079e" satisfied condition "Succeeded or Failed" Jun 22 08:41:30.917: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-cefdf01c-c8ed-413c-b313-72f455e6079e container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:30.993: INFO: Waiting for pod pod-cefdf01c-c8ed-413c-b313-72f455e6079e to disappear Jun 22 08:41:31.025: INFO: Pod pod-cefdf01c-c8ed-413c-b313-72f455e6079e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:31.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-3619" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:31.098: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 66 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:31.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-6491" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:31.386: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 111 lines ... 
Jun 22 08:41:23.841: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 22 08:41:24.027: INFO: Waiting up to 5m0s for pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32" in namespace "security-context-7974" to be "Succeeded or Failed" Jun 22 08:41:24.062: INFO: Pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32": Phase="Pending", Reason="", readiness=false. Elapsed: 34.701989ms Jun 22 08:41:26.093: INFO: Pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065297376s Jun 22 08:41:28.124: INFO: Pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096190141s Jun 22 08:41:30.154: INFO: Pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126495818s Jun 22 08:41:32.188: INFO: Pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160319707s [1mSTEP[0m: Saw pod success Jun 22 08:41:32.188: INFO: Pod "security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32" satisfied condition "Succeeded or Failed" Jun 22 08:41:32.223: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:32.308: INFO: Waiting for pod security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32 to disappear Jun 22 08:41:32.341: INFO: Pod security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:8.561 seconds][0m [sig-node] Security Context [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support seccomp unconfined on the container [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":23,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:32.415: INFO: Only supported for providers [azure] (not aws) ... skipping 85 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl label [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329[0m should update the label on a resource [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":2,"skipped":36,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:32.605: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 55 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:32.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-1652" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:32.798: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 37 lines ... [32m• [SLOW TEST:29.637 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m removes definition from spec when one version gets changed to not be served [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 29 lines ... Jun 22 08:41:11.417: INFO: PersistentVolumeClaim pvc-bqs4g found but phase is Pending instead of Bound. 
Jun 22 08:41:13.448: INFO: PersistentVolumeClaim pvc-bqs4g found and phase=Bound (12.23136345s) Jun 22 08:41:13.448: INFO: Waiting up to 3m0s for PersistentVolume local-skhfk to have phase Bound Jun 22 08:41:13.480: INFO: PersistentVolume local-skhfk found and phase=Bound (31.618742ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-2ppn [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 22 08:41:13.579: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2ppn" in namespace "volume-7761" to be "Succeeded or Failed" Jun 22 08:41:13.610: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 30.381834ms Jun 22 08:41:15.646: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066379499s Jun 22 08:41:17.677: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097393607s Jun 22 08:41:19.709: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129303152s Jun 22 08:41:21.740: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160617692s Jun 22 08:41:23.771: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.191745557s Jun 22 08:41:25.803: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223971331s Jun 22 08:41:27.837: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.257450938s Jun 22 08:41:29.868: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.289015867s Jun 22 08:41:31.907: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.327626129s Jun 22 08:41:33.940: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.360119133s [1mSTEP[0m: Saw pod success Jun 22 08:41:33.940: INFO: Pod "exec-volume-test-preprovisionedpv-2ppn" satisfied condition "Succeeded or Failed" Jun 22 08:41:33.975: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod exec-volume-test-preprovisionedpv-2ppn container exec-container-preprovisionedpv-2ppn: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:34.051: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2ppn to disappear Jun 22 08:41:34.086: INFO: Pod exec-volume-test-preprovisionedpv-2ppn no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-2ppn Jun 22 08:41:34.086: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2ppn" in namespace "volume-7761" ... skipping 28 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":6,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:36.856: INFO: Only supported for providers [gce gke] (not aws) ... skipping 80 lines ... [32m• [SLOW TEST:16.993 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m listing mutating webhooks should work [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":3,"skipped":58,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl server-side dry-run [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:926[0m should check if kubectl can dry-run update Pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:38.228: INFO: Only supported for providers [gce gke] (not aws) ... skipping 84 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should resize volume when PVC is edited while pod is using it [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":12,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:38.531: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 97 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":28,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:38.920: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 11 lines ... [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:41:12.299: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename services [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 36 lines ... 
[32m• [SLOW TEST:26.928 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to create a functioning NodePort service [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 10 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:39.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslice-1711" for this suite. [32m•[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 10 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:39.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "node-lease-test-878" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":21,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:40.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-670" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":4,"skipped":26,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... 
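The one-line JSON records interleaved in the output (e.g. {"msg":"PASSED ...","total":-1,"completed":4,"skipped":26,"failed":0}) are per-test progress markers emitted by the parallel test runner. Below is a small, assumed post-processing sketch for tallying them from a saved copy of this build log; the struct, file name, and filtering logic are my own and not part of the test harness.

// Illustrative sketch: count the one-line JSON progress records in a
// downloaded copy of this log. Field names are taken from the records
// visible above; the file path and helper names are placeholders.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type progressRecord struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	f, err := os.Open("build-log.txt") // placeholder path to the saved log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var passed, failed int
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, `{"msg":"`) {
			continue
		}
		var rec progressRecord
		if err := json.Unmarshal([]byte(line), &rec); err != nil {
			continue // not a progress record after all
		}
		if strings.HasPrefix(rec.Msg, "PASSED") {
			passed++
		} else if strings.HasPrefix(rec.Msg, "FAILED") {
			failed++
		}
	}
	fmt.Printf("progress records: %d passed, %d failed\n", passed, failed)
}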
[It] should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 Jun 22 08:41:32.953: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 22 08:41:32.953: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-l28m [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:41:33.008: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-l28m" in namespace "provisioning-4116" to be "Succeeded or Failed" Jun 22 08:41:33.054: INFO: Pod "pod-subpath-test-inlinevolume-l28m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.689418ms Jun 22 08:41:35.087: INFO: Pod "pod-subpath-test-inlinevolume-l28m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07947336s Jun 22 08:41:37.117: INFO: Pod "pod-subpath-test-inlinevolume-l28m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109149741s Jun 22 08:41:39.147: INFO: Pod "pod-subpath-test-inlinevolume-l28m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138997157s Jun 22 08:41:41.177: INFO: Pod "pod-subpath-test-inlinevolume-l28m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169518053s Jun 22 08:41:43.209: INFO: Pod "pod-subpath-test-inlinevolume-l28m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.20090075s [1mSTEP[0m: Saw pod success Jun 22 08:41:43.209: INFO: Pod "pod-subpath-test-inlinevolume-l28m" satisfied condition "Succeeded or Failed" Jun 22 08:41:43.239: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-inlinevolume-l28m container test-container-subpath-inlinevolume-l28m: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:43.329: INFO: Waiting for pod pod-subpath-test-inlinevolume-l28m to disappear Jun 22 08:41:43.359: INFO: Pod pod-subpath-test-inlinevolume-l28m no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-l28m Jun 22 08:41:43.359: INFO: Deleting pod "pod-subpath-test-inlinevolume-l28m" in namespace "provisioning-4116" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":33,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:43.491: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 32 lines ... 
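The subPath case just above ("should support readOnly file specified in the volumeMount [LinuxOnly]") exercises a container mounting a single file from an inline emptyDir volume read-only via subPath. The sketch below is only a rough illustration of that pod shape using the corev1 types, not the test's actual manifest; the pod, container, and volume names and the image are placeholders.

// Rough illustration (not the test's real manifest): an inline emptyDir
// volume mounted read-only at a subPath. All names and the image are
// placeholders.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func readOnlySubPathPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-readonly-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name:         "test-volume",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
			Containers: []v1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // placeholder image
				Command: []string{"cat", "/test-volume/test-file"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/test-file",
					SubPath:   "test-file",
					ReadOnly:  true, // the property under test
				}},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", readOnlySubPathPod())
}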
[36mDriver hostPath doesn't support PreprovisionedPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0} [BeforeEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:41:42.980: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename apply [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 7 lines ... [1mSTEP[0m: Destroying namespace "apply-7039" for this suite. [AfterEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":4,"skipped":35,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 16 lines ... [1mSTEP[0m: Destroying namespace "services-5868" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:44.047: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 44 lines ... Jun 22 08:41:31.413: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0644 on node default medium Jun 22 08:41:31.628: INFO: Waiting up to 5m0s for pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02" in namespace "emptydir-2967" to be "Succeeded or Failed" Jun 22 08:41:31.681: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Pending", Reason="", readiness=false. Elapsed: 52.631875ms Jun 22 08:41:33.712: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083514377s Jun 22 08:41:35.743: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11519234s Jun 22 08:41:37.774: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.145406981s Jun 22 08:41:39.804: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175768435s Jun 22 08:41:41.847: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Pending", Reason="", readiness=false. Elapsed: 10.21927002s Jun 22 08:41:43.878: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.249604276s [1mSTEP[0m: Saw pod success Jun 22 08:41:43.878: INFO: Pod "pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02" satisfied condition "Succeeded or Failed" Jun 22 08:41:43.908: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:43.995: INFO: Waiting for pod pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02 to disappear Jun 22 08:41:44.027: INFO: Pod pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:12.679 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":56,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:44.102: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 46 lines ... Jun 22 08:40:57.260: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488 [1mSTEP[0m: Creating a pod to test service account token: Jun 22 08:40:58.474: INFO: Waiting up to 5m0s for pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" in namespace "svcaccounts-8317" to be "Succeeded or Failed" Jun 22 08:40:58.503: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 29.141312ms Jun 22 08:41:00.534: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060144671s Jun 22 08:41:02.565: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091054977s Jun 22 08:41:04.596: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122025524s Jun 22 08:41:06.673: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.199163642s Jun 22 08:41:08.703: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.229088691s [1mSTEP[0m: Saw pod success Jun 22 08:41:08.703: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" satisfied condition "Succeeded or Failed" Jun 22 08:41:08.732: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:08.810: INFO: Waiting for pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 to disappear Jun 22 08:41:08.840: INFO: Pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 no longer exists [1mSTEP[0m: Creating a pod to test service account token: Jun 22 08:41:08.872: INFO: Waiting up to 5m0s for pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" in namespace "svcaccounts-8317" to be "Succeeded or Failed" Jun 22 08:41:08.902: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 30.048633ms Jun 22 08:41:10.933: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06077521s Jun 22 08:41:12.964: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091616651s Jun 22 08:41:14.997: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125083172s Jun 22 08:41:17.028: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155494537s Jun 22 08:41:19.059: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186696277s [1mSTEP[0m: Saw pod success Jun 22 08:41:19.059: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" satisfied condition "Succeeded or Failed" Jun 22 08:41:19.088: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:19.162: INFO: Waiting for pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 to disappear Jun 22 08:41:19.191: INFO: Pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 no longer exists [1mSTEP[0m: Creating a pod to test service account token: Jun 22 08:41:19.223: INFO: Waiting up to 5m0s for pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" in namespace "svcaccounts-8317" to be "Succeeded or Failed" Jun 22 08:41:19.252: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 29.634012ms Jun 22 08:41:21.283: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060167368s Jun 22 08:41:23.314: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091103447s Jun 22 08:41:25.344: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121464432s Jun 22 08:41:27.376: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152913284s Jun 22 08:41:29.407: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.183761295s Jun 22 08:41:31.438: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.215291581s [1mSTEP[0m: Saw pod success Jun 22 08:41:31.438: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" satisfied condition "Succeeded or Failed" Jun 22 08:41:31.469: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:31.548: INFO: Waiting for pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 to disappear Jun 22 08:41:31.590: INFO: Pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 no longer exists [1mSTEP[0m: Creating a pod to test service account token: Jun 22 08:41:31.659: INFO: Waiting up to 5m0s for pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" in namespace "svcaccounts-8317" to be "Succeeded or Failed" Jun 22 08:41:31.697: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 37.440284ms Jun 22 08:41:33.726: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067084279s Jun 22 08:41:35.757: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097778395s Jun 22 08:41:37.788: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12843473s Jun 22 08:41:39.818: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159322749s Jun 22 08:41:41.872: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212833181s Jun 22 08:41:43.903: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Pending", Reason="", readiness=false. Elapsed: 12.243748819s Jun 22 08:41:45.933: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.273851458s [1mSTEP[0m: Saw pod success Jun 22 08:41:45.933: INFO: Pod "test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18" satisfied condition "Succeeded or Failed" Jun 22 08:41:45.962: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:46.041: INFO: Waiting for pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 to disappear Jun 22 08:41:46.072: INFO: Pod test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
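The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines that recur throughout this section come from a simple phase-polling loop. A rough client-go approximation of that pattern is sketched below; it is not the e2e framework's own helper, just the same idea expressed directly.

package sketches

// Hedged approximation of the poll-until-terminal-phase loop behind the
// "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" log lines.

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodSuccess(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // "Saw pod success"
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending/Running, keep polling
		}
	})
}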
[32m• [SLOW TEST:48.873 seconds][0m [sig-auth] ServiceAccounts [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23[0m should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":21,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Generated clientset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... [32m• [SLOW TEST:8.460 seconds][0m [sig-api-machinery] Generated clientset [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:103[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":4,"skipped":59,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:46.719: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 221 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41[0m on terminated container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134[0m should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance]","total":-1,"completed":4,"skipped":34,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:46.849: INFO: Only supported for providers [gce gke] (not aws) ... skipping 36 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:41:48.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-6538" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":5,"skipped":82,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 36 lines ... [32m• [SLOW TEST:22.312 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to change the type from ExternalName to NodePort [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:48.395: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 135 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI Volume expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:641[0m should expand volume by restarting pod if attach=off, nodeExpansion=on [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:670[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":2,"skipped":2,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:41:36.883: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 22 08:41:37.075: INFO: Waiting up to 5m0s for pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff" in namespace "emptydir-7974" to be "Succeeded or Failed" Jun 22 08:41:37.106: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 31.04955ms Jun 22 08:41:39.137: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061809703s Jun 22 08:41:41.168: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.093008839s Jun 22 08:41:43.199: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123979457s Jun 22 08:41:45.230: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155117567s Jun 22 08:41:47.261: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186022328s Jun 22 08:41:49.300: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.224789674s [1mSTEP[0m: Saw pod success Jun 22 08:41:49.300: INFO: Pod "pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff" satisfied condition "Succeeded or Failed" Jun 22 08:41:49.331: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:49.403: INFO: Waiting for pod pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff to disappear Jun 22 08:41:49.434: INFO: Pod pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:12.615 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:49.505: INFO: Only supported for providers [gce gke] (not aws) ... skipping 128 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474[0m that expects NO client request [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484[0m should support a client that connects, sends DATA, and disconnects [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":50,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 68 lines ... 
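The "(non-root,0644,tmpfs)" emptyDir case above amounts to a memory-backed emptyDir mounted into a container that runs as a non-root user and inspects the mode of a file it writes. A minimal sketch, assuming an illustrative busybox image, UID and paths:

package sketches

// Hedged sketch of a tmpfs-backed emptyDir used by a non-root container.
// Image, UID, command and paths are assumptions, not the framework's values.

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirTmpfsPod() *corev1.Pod {
	nonRootUID := int64(1000) // assumed non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox:1.36", // assumed image
				Command:         []string{"sh", "-c", "echo data > /test-volume/file && ls -l /test-volume/file"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}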
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":12,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:41:59.529: INFO: Only supported for providers [azure] (not aws) ... skipping 401 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m version v1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74[0m should proxy through a service and a pod [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":4,"skipped":60,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":4,"skipped":69,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:41:53.555: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 76 lines ... Jun 22 08:41:29.615: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194 Jun 22 08:41:29.761: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 22 08:41:29.826: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5175" in namespace "provisioning-5175" to be "Succeeded or Failed" Jun 22 08:41:29.855: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 28.957315ms Jun 22 08:41:31.887: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060971719s Jun 22 08:41:33.917: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.091312149s Jun 22 08:41:35.947: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121290658s Jun 22 08:41:37.979: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153576445s [1mSTEP[0m: Saw pod success Jun 22 08:41:37.979: INFO: Pod "hostpath-symlink-prep-provisioning-5175" satisfied condition "Succeeded or Failed" Jun 22 08:41:37.979: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5175" in namespace "provisioning-5175" Jun 22 08:41:38.014: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5175" to be fully deleted Jun 22 08:41:38.044: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-fkrw [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:41:38.074: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fkrw" in namespace "provisioning-5175" to be "Succeeded or Failed" Jun 22 08:41:38.103: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.770726ms Jun 22 08:41:40.132: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057864377s Jun 22 08:41:42.165: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090633121s Jun 22 08:41:44.194: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119949097s Jun 22 08:41:46.225: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1511864s Jun 22 08:41:48.255: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.180829727s Jun 22 08:41:50.289: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.214574558s Jun 22 08:41:52.318: INFO: Pod "pod-subpath-test-inlinevolume-fkrw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.243532829s [1mSTEP[0m: Saw pod success Jun 22 08:41:52.318: INFO: Pod "pod-subpath-test-inlinevolume-fkrw" satisfied condition "Succeeded or Failed" Jun 22 08:41:52.347: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-inlinevolume-fkrw container test-container-volume-inlinevolume-fkrw: <nil> [1mSTEP[0m: delete the pod Jun 22 08:41:52.434: INFO: Waiting for pod pod-subpath-test-inlinevolume-fkrw to disappear Jun 22 08:41:52.463: INFO: Pod pod-subpath-test-inlinevolume-fkrw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-fkrw Jun 22 08:41:52.463: INFO: Deleting pod "pod-subpath-test-inlinevolume-fkrw" in namespace "provisioning-5175" [1mSTEP[0m: Deleting pod Jun 22 08:41:52.492: INFO: Deleting pod "pod-subpath-test-inlinevolume-fkrw" in namespace "provisioning-5175" Jun 22 08:41:52.551: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5175" in namespace "provisioning-5175" to be "Succeeded or Failed" Jun 22 08:41:52.580: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 28.407786ms Jun 22 08:41:54.611: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059221204s Jun 22 08:41:56.641: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.089031097s Jun 22 08:41:58.671: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119213941s Jun 22 08:42:00.700: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148155963s Jun 22 08:42:02.729: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177377754s Jun 22 08:42:04.764: INFO: Pod "hostpath-symlink-prep-provisioning-5175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.212200546s [1mSTEP[0m: Saw pod success Jun 22 08:42:04.764: INFO: Pod "hostpath-symlink-prep-provisioning-5175" satisfied condition "Succeeded or Failed" Jun 22 08:42:04.764: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5175" in namespace "provisioning-5175" Jun 22 08:42:04.807: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5175" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:42:04.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-5175" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":34,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 124 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should support exec through kubectl proxy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":3,"skipped":40,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:06.700: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 130 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should contain last line of the log [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:623[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":5,"skipped":40,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:07.429: INFO: Only supported for providers [vsphere] (not aws) ... skipping 94 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m Verify if offline PVC expansion works [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":43,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:07.658: INFO: Only supported for providers [vsphere] (not aws) ... skipping 44 lines ... [32m• [SLOW TEST:9.260 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should support remote command execution over websockets [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:08.808: INFO: Only supported for providers [gce gke] (not aws) ... skipping 71 lines ... Jun 22 08:41:57.196: INFO: PersistentVolumeClaim pvc-9mjlf found but phase is Pending instead of Bound. 
Jun 22 08:41:59.227: INFO: PersistentVolumeClaim pvc-9mjlf found and phase=Bound (12.241907545s) Jun 22 08:41:59.228: INFO: Waiting up to 3m0s for PersistentVolume local-6zv4n to have phase Bound Jun 22 08:41:59.259: INFO: PersistentVolume local-6zv4n found and phase=Bound (31.108492ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-qh6b [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:41:59.357: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qh6b" in namespace "provisioning-6573" to be "Succeeded or Failed" Jun 22 08:41:59.399: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.84129ms Jun 22 08:42:01.430: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072829078s Jun 22 08:42:03.461: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103825049s Jun 22 08:42:05.495: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138678527s Jun 22 08:42:07.536: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179076889s Jun 22 08:42:09.571: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.213775282s [1mSTEP[0m: Saw pod success Jun 22 08:42:09.571: INFO: Pod "pod-subpath-test-preprovisionedpv-qh6b" satisfied condition "Succeeded or Failed" Jun 22 08:42:09.603: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-subpath-test-preprovisionedpv-qh6b container test-container-subpath-preprovisionedpv-qh6b: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:09.686: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qh6b to disappear Jun 22 08:42:09.717: INFO: Pod pod-subpath-test-preprovisionedpv-qh6b no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-qh6b Jun 22 08:42:09.717: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qh6b" in namespace "provisioning-6573" ... skipping 30 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":41,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:11.092: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 82 lines ... 
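Before the pre-provisioned-PV subPath steps above can run, the suite waits for the claim to leave Pending and report Bound, which is what the "PersistentVolumeClaim ... found but phase is Pending instead of Bound" lines show. A hedged client-go approximation of that wait (not the framework's own helper):

package sketches

// Hedged approximation of the wait-for-PVC-Bound polling seen in the log.

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Keep polling while the claim is Pending; done once it reports Bound.
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}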
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and read from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":36,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-windows] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28 Jun 22 08:42:11.485: INFO: Only supported for node OS distro [windows] (not debian) ... skipping 63 lines ... [32m• [SLOW TEST:80.428 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m updates should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:14.310: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 109 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":25,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:07.444: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 22 08:42:07.673: INFO: Waiting up to 5m0s for pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b" in namespace "downward-api-6221" to be "Succeeded or Failed" Jun 22 08:42:07.705: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.835418ms Jun 22 08:42:09.737: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06444737s Jun 22 08:42:11.771: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097870939s Jun 22 08:42:13.803: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129824754s Jun 22 08:42:15.845: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17197881s Jun 22 08:42:17.881: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20803304s Jun 22 08:42:19.913: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.240232602s [1mSTEP[0m: Saw pod success Jun 22 08:42:19.913: INFO: Pod "downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b" satisfied condition "Succeeded or Failed" Jun 22 08:42:19.945: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:20.018: INFO: Waiting for pod downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b to disappear Jun 22 08:42:20.049: INFO: Pod downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
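The downward API case above ("should provide pod UID as env vars") injects the pod's own metadata.uid into the container environment through a fieldRef. A small sketch with client-go types, using an assumed busybox image and illustrative names:

package sketches

// Hedged sketch of exposing the pod UID via the downward API as an env var.

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIUIDPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.36", // assumed image
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// Resolved by the kubelet to the pod's own UID at start-up.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
}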
[32m• [SLOW TEST:12.675 seconds][0m [sig-node] Downward API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should provide pod UID as env vars [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral ... skipping 139 lines ... [AfterEach] [sig-api-machinery] client-go should negotiate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:42:20.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":7,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:20.230: INFO: Only supported for providers [openstack] (not aws) ... skipping 45 lines ... [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating secret with name secret-test-9b4e29a4-63a2-475a-8f69-e7714694566a [1mSTEP[0m: Creating a pod to test consume secrets Jun 22 08:42:11.458: INFO: Waiting up to 5m0s for pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086" in namespace "secrets-7929" to be "Succeeded or Failed" Jun 22 08:42:11.488: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Pending", Reason="", readiness=false. Elapsed: 30.127537ms Jun 22 08:42:13.520: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061912591s Jun 22 08:42:15.566: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10833341s Jun 22 08:42:17.598: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1404373s Jun 22 08:42:19.630: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172116833s Jun 22 08:42:21.661: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.202815152s Jun 22 08:42:23.692: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.234494789s [1mSTEP[0m: Saw pod success Jun 22 08:42:23.693: INFO: Pod "pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086" satisfied condition "Succeeded or Failed" Jun 22 08:42:23.723: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086 container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:23.801: INFO: Waiting for pod pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086 to disappear Jun 22 08:42:23.831: INFO: Pod pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 5 lines ... [32m• [SLOW TEST:12.826 seconds][0m [sig-storage] Secrets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes ... skipping 67 lines ... Jun 22 08:42:14.593: INFO: Pod aws-client still exists Jun 22 08:42:16.563: INFO: Waiting for pod aws-client to disappear Jun 22 08:42:16.593: INFO: Pod aws-client still exists Jun 22 08:42:18.562: INFO: Waiting for pod aws-client to disappear Jun 22 08:42:18.592: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws Jun 22 08:42:18.887: INFO: Couldn't delete PD "aws://us-east-1a/vol-081e3be3718d088b3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-081e3be3718d088b3 is currently attached to i-0920578a2cdabf0a3 status code: 400, request id: 9f863d8d-26af-475a-99dd-dbaddf68b8a2 Jun 22 08:42:24.258: INFO: Successfully deleted PD "aws://us-east-1a/vol-081e3be3718d088b3". [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:42:24.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-4664" for this suite. ... skipping 6 lines ... 
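The secrets case above mounts a Secret into the pod filesystem through a volume and reads it back from the container. A minimal sketch of that pattern; the secret name, mount path and image are illustrative assumptions:

package sketches

// Hedged sketch of consuming a Secret through a volume mount.

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretVolumePod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.36", // assumed image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}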
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":5,"failed":0} [BeforeEach] [sig-api-machinery] API priority and fairness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:24.326: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename apf [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 35 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [36mDriver local doesn't support ext3 -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":4,"failed":0} [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:20.210: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename replicaset [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 18 lines ... [32m• [SLOW TEST:6.590 seconds][0m [sig-apps] ReplicaSet [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should adopt matching pods on creation and release no longer matching pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 18 lines ... Jun 22 08:41:56.862: INFO: PersistentVolumeClaim pvc-mdxfp found but phase is Pending instead of Bound. 
Jun 22 08:41:58.894: INFO: PersistentVolumeClaim pvc-mdxfp found and phase=Bound (4.095699979s) Jun 22 08:41:58.894: INFO: Waiting up to 3m0s for PersistentVolume local-nrqgr to have phase Bound Jun 22 08:41:58.926: INFO: PersistentVolume local-nrqgr found and phase=Bound (31.264576ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-cxqw [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:41:59.020: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cxqw" in namespace "provisioning-2677" to be "Succeeded or Failed" Jun 22 08:41:59.051: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.957034ms Jun 22 08:42:01.084: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063461209s Jun 22 08:42:03.116: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096365873s Jun 22 08:42:05.149: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128769952s Jun 22 08:42:07.182: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161733834s Jun 22 08:42:09.216: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196243662s Jun 22 08:42:11.249: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.228863441s Jun 22 08:42:13.281: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.260869582s Jun 22 08:42:15.315: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.294645978s [1mSTEP[0m: Saw pod success Jun 22 08:42:15.315: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw" satisfied condition "Succeeded or Failed" Jun 22 08:42:15.346: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-preprovisionedpv-cxqw container test-container-subpath-preprovisionedpv-cxqw: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:15.418: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cxqw to disappear Jun 22 08:42:15.452: INFO: Pod pod-subpath-test-preprovisionedpv-cxqw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-cxqw Jun 22 08:42:15.452: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cxqw" in namespace "provisioning-2677" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-cxqw [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:42:15.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cxqw" in namespace "provisioning-2677" to be "Succeeded or Failed" Jun 22 08:42:15.573: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 39.446871ms Jun 22 08:42:17.604: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071119508s Jun 22 08:42:19.637: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10337459s Jun 22 08:42:21.668: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134869924s Jun 22 08:42:23.701: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.167688895s Jun 22 08:42:25.733: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.200018536s [1mSTEP[0m: Saw pod success Jun 22 08:42:25.733: INFO: Pod "pod-subpath-test-preprovisionedpv-cxqw" satisfied condition "Succeeded or Failed" Jun 22 08:42:25.765: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-preprovisionedpv-cxqw container test-container-subpath-preprovisionedpv-cxqw: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:25.838: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cxqw to disappear Jun 22 08:42:25.869: INFO: Pod pod-subpath-test-preprovisionedpv-cxqw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-cxqw Jun 22 08:42:25.869: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cxqw" in namespace "provisioning-2677" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":83,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:42:27.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "metrics-grabber-6377" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":3,"skipped":5,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:27.759: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 179 lines ... 
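The repeated "Elapsed: ..." lines above are the standard e2e wait pattern: the suite polls the pod roughly every two seconds until it reports Succeeded or Failed, or the 5m0s budget runs out. Below is a minimal client-go sketch of that pattern, illustrative only; the helper name waitForPodCompleted is made up, and the namespace/pod names are copied from the log rather than produced by this code.

// Sketch of the poll-until-terminal-phase loop seen in the log (not the framework's own helper).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompleted polls every 2s until the pod reaches Succeeded or Failed,
// mirroring the ~2s "Elapsed" cadence visible above.
func waitForPodCompleted(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodCompleted(cs, "provisioning-2677", "pod-subpath-test-preprovisionedpv-cxqw", 5*time.Minute); err != nil {
		panic(err)
	}
}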
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m Granular Checks: Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:143[0m should function for endpoint-Service: udp [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:248[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":-1,"completed":2,"skipped":4,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:30.135: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 173 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":5,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:37.068: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 186 lines ... Jun 22 08:42:11.141: INFO: PersistentVolumeClaim pvc-pr6qv found but phase is Pending instead of Bound. Jun 22 08:42:13.170: INFO: PersistentVolumeClaim pvc-pr6qv found and phase=Bound (12.210253888s) Jun 22 08:42:13.170: INFO: Waiting up to 3m0s for PersistentVolume local-p6222 to have phase Bound Jun 22 08:42:13.205: INFO: PersistentVolume local-p6222 found and phase=Bound (35.23411ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-hzjp [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:42:13.299: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hzjp" in namespace "provisioning-2937" to be "Succeeded or Failed" Jun 22 08:42:13.327: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 28.716848ms Jun 22 08:42:15.357: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058361339s Jun 22 08:42:17.386: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087338228s Jun 22 08:42:19.418: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119466759s Jun 22 08:42:21.448: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.148806999s Jun 22 08:42:23.478: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179128719s Jun 22 08:42:25.508: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209114103s Jun 22 08:42:27.552: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.253553015s Jun 22 08:42:29.652: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.3537248s Jun 22 08:42:31.681: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.382296227s Jun 22 08:42:33.710: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.411344719s [1mSTEP[0m: Saw pod success Jun 22 08:42:33.710: INFO: Pod "pod-subpath-test-preprovisionedpv-hzjp" satisfied condition "Succeeded or Failed" Jun 22 08:42:33.739: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-preprovisionedpv-hzjp container test-container-subpath-preprovisionedpv-hzjp: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:33.811: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hzjp to disappear Jun 22 08:42:33.840: INFO: Pod pod-subpath-test-preprovisionedpv-hzjp no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-hzjp Jun 22 08:42:33.840: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hzjp" in namespace "provisioning-2937" ... skipping 69 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m CustomResourceDefinition Watch [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42[0m watch on custom resource definition objects [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":5,"skipped":28,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... [32m• [SLOW TEST:15.883 seconds][0m [sig-node] InitContainer [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should invoke init containers on a RestartAlways pod [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... 
[32m• [SLOW TEST:22.511 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m evictions: enough pods, replicaSet, percentage => should allow an eviction [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":10,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:47.064: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 54 lines ... [32m• [SLOW TEST:39.537 seconds][0m [sig-node] PreStop [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should call prestop when killing a pod [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:51.040: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 80 lines ... Jun 22 08:42:27.146: INFO: PersistentVolumeClaim pvc-prd9t found but phase is Pending instead of Bound. Jun 22 08:42:29.181: INFO: PersistentVolumeClaim pvc-prd9t found and phase=Bound (14.266166291s) Jun 22 08:42:29.182: INFO: Waiting up to 3m0s for PersistentVolume local-m4nff to have phase Bound Jun 22 08:42:29.211: INFO: PersistentVolume local-m4nff found and phase=Bound (29.301527ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-vfzt [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:42:29.314: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vfzt" in namespace "provisioning-250" to be "Succeeded or Failed" Jun 22 08:42:29.345: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 31.052953ms Jun 22 08:42:31.376: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061955488s Jun 22 08:42:33.407: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093030879s Jun 22 08:42:35.440: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125938475s Jun 22 08:42:37.470: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156051968s Jun 22 08:42:39.501: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.18698192s Jun 22 08:42:41.531: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.217531631s Jun 22 08:42:43.564: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.25017058s Jun 22 08:42:45.594: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.280035272s [1mSTEP[0m: Saw pod success Jun 22 08:42:45.594: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt" satisfied condition "Succeeded or Failed" Jun 22 08:42:45.623: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-preprovisionedpv-vfzt container test-container-subpath-preprovisionedpv-vfzt: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:45.743: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vfzt to disappear Jun 22 08:42:45.779: INFO: Pod pod-subpath-test-preprovisionedpv-vfzt no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-vfzt Jun 22 08:42:45.779: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vfzt" in namespace "provisioning-250" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-vfzt [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:42:45.845: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vfzt" in namespace "provisioning-250" to be "Succeeded or Failed" Jun 22 08:42:45.874: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 29.30856ms Jun 22 08:42:47.904: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058887177s Jun 22 08:42:49.953: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108472717s Jun 22 08:42:51.984: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13878769s Jun 22 08:42:54.013: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168364071s Jun 22 08:42:56.045: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.200102337s [1mSTEP[0m: Saw pod success Jun 22 08:42:56.045: INFO: Pod "pod-subpath-test-preprovisionedpv-vfzt" satisfied condition "Succeeded or Failed" Jun 22 08:42:56.080: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-preprovisionedpv-vfzt container test-container-subpath-preprovisionedpv-vfzt: <nil> [1mSTEP[0m: delete the pod Jun 22 08:42:56.159: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vfzt to disappear Jun 22 08:42:56.189: INFO: Pod pod-subpath-test-preprovisionedpv-vfzt no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-vfzt Jun 22 08:42:56.189: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vfzt" in namespace "provisioning-250" ... skipping 48 lines ... 
Jun 22 08:42:27.752: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} [1mSTEP[0m: creating a StorageClass volume-expand-3110gkqml [1mSTEP[0m: creating a claim Jun 22 08:42:27.784: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Expanding non-expandable pvc Jun 22 08:42:27.874: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} Jun 22 08:42:27.952: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:30.034: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:32.016: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:34.016: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:36.016: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:38.017: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... 
// 3 identical fields } Jun 22 08:42:40.015: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:42.015: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:44.018: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:46.015: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:48.017: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:50.031: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:52.017: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... 
// 3 identical fields } Jun 22 08:42:54.016: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:56.017: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:58.017: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3110gkqml", ... // 3 identical fields } Jun 22 08:42:58.080: INFO: Error updating pvc aws8zdhj: PersistentVolumeClaim "aws8zdhj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 24 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not allow expansion of pvcs without AllowVolumeExpansion property [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":85,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:58.256: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 148 lines ... 
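The "Expanding non-expandable pvc" loop above keeps bumping spec.resources.requests.storage on a bound claim whose StorageClass has no allowVolumeExpansion, so the API server rejects every update with the "Forbidden: spec is immutable ..." diff shown. A rough client-go sketch of that update attempt follows; it is illustrative only, and the namespace is a placeholder (only the PVC and StorageClass names appear in the log).

// Sketch of the rejected PVC resize attempt; expected to fail without allowVolumeExpansion.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "volume-expand-3110" // placeholder namespace, not taken from the log
	pvcs := cs.CoreV1().PersistentVolumeClaims(ns)
	pvc, err := pvcs.Get(context.TODO(), "aws8zdhj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Ask for 2Gi instead of the original 1Gi, as the test does.
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("2Gi")

	if _, err := pvcs.Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		// Without allowVolumeExpansion on the StorageClass this rejection is the expected outcome.
		fmt.Println("update rejected as expected:", err)
	}
}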
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI online volume expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:752[0m should expand volume without restarting pod if attach=off, nodeExpansion=on [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:767[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":8,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode ... skipping 40 lines ... [1mSTEP[0m: Deleting pod hostexec-ip-172-20-0-238.ec2.internal-wmmtf in namespace volumemode-1359 Jun 22 08:42:42.080: INFO: Deleting pod "pod-f4f67707-a58b-4a31-893e-36536b687d81" in namespace "volumemode-1359" Jun 22 08:42:42.114: INFO: Wait up to 5m0s for pod "pod-f4f67707-a58b-4a31-893e-36536b687d81" to be fully deleted [1mSTEP[0m: Deleting pv and pvc Jun 22 08:42:54.174: INFO: Deleting PersistentVolumeClaim "pvc-rlxw7" Jun 22 08:42:54.207: INFO: Deleting PersistentVolume "aws-9wn6m" Jun 22 08:42:54.363: INFO: Couldn't delete PD "aws://us-east-1a/vol-0bf1dfc2dcd9031e5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0bf1dfc2dcd9031e5 is currently attached to i-0bb731c09f583fc01 status code: 400, request id: 467f17e2-f694-4aec-b2e7-9059f6332f52 Jun 22 08:42:59.698: INFO: Successfully deleted PD "aws://us-east-1a/vol-0bf1dfc2dcd9031e5". [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:42:59.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumemode-1359" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":50,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:42:59.763: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 85 lines ... 
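The "Couldn't delete PD ..., sleeping 5s" line above is a delete-with-retry against EC2 while the EBS volume is still attached to a node. A rough aws-sdk-go (v1) sketch of that pattern follows; it is illustrative only, the volume ID is copied from the log, and the retry budget and region handling are arbitrary choices, not the test framework's code.

// Sketch of retrying EBS volume deletion while the API answers VolumeInUse.
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	volumeID := "vol-0bf1dfc2dcd9031e5"
	for attempt := 0; attempt < 10; attempt++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			fmt.Println("deleted", volumeID)
			return
		}
		// While the volume is still attached the API answers VolumeInUse; back off and retry.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			time.Sleep(5 * time.Second)
			continue
		}
		panic(err)
	}
}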
[32m• [SLOW TEST:53.201 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":62,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:43.171: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 22 08:42:43.388: INFO: Waiting up to 5m0s for pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2" in namespace "security-context-5214" to be "Succeeded or Failed" Jun 22 08:42:43.423: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.524028ms Jun 22 08:42:45.456: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067984484s Jun 22 08:42:47.488: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09984096s Jun 22 08:42:49.521: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132834756s Jun 22 08:42:51.565: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176957493s Jun 22 08:42:53.605: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217723468s Jun 22 08:42:55.638: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.249944678s Jun 22 08:42:57.674: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.286432746s Jun 22 08:42:59.707: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.319028814s Jun 22 08:43:01.740: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.351796554s [1mSTEP[0m: Saw pod success Jun 22 08:43:01.740: INFO: Pod "security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2" satisfied condition "Succeeded or Failed" Jun 22 08:43:01.776: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:01.855: INFO: Waiting for pod security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2 to disappear Jun 22 08:43:01.886: INFO: Pod security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:18.781 seconds][0m [sig-node] Security Context [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support seccomp default which is unconfined [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":29,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:01.960: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 104 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":3,"skipped":3,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:02.080: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 34 lines ... 
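The seccomp test above creates "a pod to test seccomp.security.alpha.kubernetes.io/pod", i.e. it exercises the legacy pod-level seccomp annotation with the value "unconfined". A bare-bones pod spec in that spirit is sketched below; it is illustrative only, and the image, command, and object names are arbitrary rather than the test's actual values.

// Sketch of a pod using the legacy unconfined-seccomp annotation.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func unconfinedSeccompPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "security-context-unconfined",
			// Legacy annotation exercised by this style of test; newer specs set
			// spec.securityContext.seccompProfile instead.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"grep", "Seccomp:", "/proc/self/status"},
			}},
		},
	}
}

func main() {
	fmt.Println(unconfinedSeccompPod().Name)
}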
[36mOnly supported for providers [openstack] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":58,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:37.314: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename job [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are not locally restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227 [1mSTEP[0m: Looking for a node to schedule job pod [1mSTEP[0m: Creating a job [1mSTEP[0m: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:03.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "job-3631" for this suite. [32m• [SLOW TEST:26.311 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should run a job to completion when tasks sometimes fail and are not locally restarted [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":7,"skipped":58,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:03.631: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 176 lines ... [32m• [SLOW TEST:58.817 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m iterative rollouts should eventually progress [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":5,"skipped":48,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:03.981: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping ... skipping 35 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:04.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-6697" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:04.390: INFO: Only supported for providers [azure] (not aws) ... skipping 38 lines ... [32m• [SLOW TEST:18.438 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m evictions: too few pods, absolute => should not allow an eviction [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":4,"skipped":21,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:04.481: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 235 lines ... [It] should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 Jun 22 08:42:51.196: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 22 08:42:51.230: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-72b2 [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:42:51.267: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-72b2" in namespace "provisioning-733" to be "Succeeded or Failed" Jun 22 08:42:51.297: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.222398ms Jun 22 08:42:53.329: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06213367s Jun 22 08:42:55.359: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092438852s Jun 22 08:42:57.390: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123104958s Jun 22 08:42:59.420: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153134538s Jun 22 08:43:01.452: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.18555251s Jun 22 08:43:03.492: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.225586135s Jun 22 08:43:05.522: INFO: Pod "pod-subpath-test-inlinevolume-72b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.255416114s [1mSTEP[0m: Saw pod success Jun 22 08:43:05.522: INFO: Pod "pod-subpath-test-inlinevolume-72b2" satisfied condition "Succeeded or Failed" Jun 22 08:43:05.552: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-inlinevolume-72b2 container test-container-subpath-inlinevolume-72b2: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:05.624: INFO: Waiting for pod pod-subpath-test-inlinevolume-72b2 to disappear Jun 22 08:43:05.653: INFO: Pod pod-subpath-test-inlinevolume-72b2 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-72b2 Jun 22 08:43:05.653: INFO: Deleting pod "pod-subpath-test-inlinevolume-72b2" in namespace "provisioning-733" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":49,"failed":0} [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:05.775: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename proxy [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 83 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:06.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "proxy-1836" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":10,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:06.699: INFO: Only supported for providers [vsphere] (not aws) ... skipping 47 lines ... 
[32m• [SLOW TEST:8.685 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should update/patch PodDisruptionBudget status [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":69,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:03.987: INFO: >>> kubeConfig: /root/.kube/config ... skipping 59 lines ... Jun 22 08:42:11.938: INFO: PersistentVolumeClaim csi-hostpathrh69g found but phase is Pending instead of Bound. Jun 22 08:42:13.969: INFO: PersistentVolumeClaim csi-hostpathrh69g found but phase is Pending instead of Bound. Jun 22 08:42:16.003: INFO: PersistentVolumeClaim csi-hostpathrh69g found but phase is Pending instead of Bound. Jun 22 08:42:18.033: INFO: PersistentVolumeClaim csi-hostpathrh69g found and phase=Bound (12.231325486s) [1mSTEP[0m: Expanding non-expandable pvc Jun 22 08:42:18.092: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} Jun 22 08:42:18.154: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:20.214: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:22.215: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:24.229: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:26.219: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:28.226: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:30.219: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:32.218: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and 
the storageclass that provisions the pvc must support resize Jun 22 08:42:34.214: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:36.218: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:38.214: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:40.219: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:42.216: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:44.217: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:46.215: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:48.215: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jun 22 08:42:48.279: INFO: Error updating pvc csi-hostpathrh69g: persistentvolumeclaims "csi-hostpathrh69g" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize [1mSTEP[0m: Deleting pvc Jun 22 08:42:48.279: INFO: Deleting PersistentVolumeClaim "csi-hostpathrh69g" Jun 22 08:42:48.311: INFO: Waiting up to 5m0s for PersistentVolume pvc-deb8ad46-e11c-494d-a5f3-10aa345c264a to get deleted Jun 22 08:42:48.340: INFO: PersistentVolume pvc-deb8ad46-e11c-494d-a5f3-10aa345c264a found and phase=Released (29.406668ms) Jun 22 08:42:53.375: INFO: PersistentVolume pvc-deb8ad46-e11c-494d-a5f3-10aa345c264a was removed [1mSTEP[0m: Deleting sc ... skipping 53 lines ... 
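The repeated "forbidden: only dynamically provisioned pvc can be resized and the storageclass ... must support resize" lines above reflect the API-server check that the claim's StorageClass advertises allowVolumeExpansion. A small client-go helper in that spirit is sketched below; it is illustrative only, the function name canExpand is made up, and the StorageClass name is a placeholder.

// Sketch of checking a StorageClass for volume-expansion support before resizing a PVC.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// canExpand reports whether the named StorageClass has allowVolumeExpansion set to true.
func canExpand(cs kubernetes.Interface, scName string) (bool, error) {
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), scName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return sc.AllowVolumeExpansion != nil && *sc.AllowVolumeExpansion, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := canExpand(cs, "csi-hostpath-sc") // placeholder StorageClass name
	if err != nil {
		panic(err)
	}
	fmt.Println("expandable:", ok)
}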
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (block volmode)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not allow expansion of pvcs without AllowVolumeExpansion property [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":69,"failed":0} [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:07.049: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename flexvolume [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 128 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should provision a volume and schedule a pod with AllowedTopologies [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":8,"skipped":60,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:08.094: INFO: >>> kubeConfig: /root/.kube/config ... skipping 90 lines ... [32m• [SLOW TEST:7.766 seconds][0m [sig-scheduling] LimitRange [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40[0m should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":4,"skipped":8,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:09.858: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... 
skipping 42 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Jun 22 08:43:03.822: INFO: Waiting up to 5m0s for pod "busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1" in namespace "security-context-test-902" to be "Succeeded or Failed" Jun 22 08:43:03.850: INFO: Pod "busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.453313ms Jun 22 08:43:05.883: INFO: Pod "busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061471695s Jun 22 08:43:07.918: INFO: Pod "busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096185873s Jun 22 08:43:09.958: INFO: Pod "busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135553499s Jun 22 08:43:09.958: INFO: Pod "busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:09.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-902" for this suite. ... skipping 2 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a container with runAsUser [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50[0m should run the container with uid 0 [LinuxOnly] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":67,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:10.040: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 77 lines ... 
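The runAsUser test above ("busybox-user-0-...") pins the container UID through the container security context. A minimal pod spec in that spirit is sketched below; it is illustrative only, and the image, command, and helper name podRunningAsUID are arbitrary choices, not the framework's own code.

// Sketch of a pod whose container runs as a fixed UID via SecurityContext.RunAsUser.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podRunningAsUID(uid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("busybox-user-%d", uid)},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					// The variant in the log uses uid 0 ([LinuxOnly]).
					RunAsUser: &uid,
				},
			}},
		},
	}
}

func main() {
	fmt.Println(podRunningAsUID(0).Name)
}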
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should provision a volume and schedule a pod with AllowedTopologies [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":2,"skipped":11,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:13.144: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 33 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:13.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "resourcequota-8403" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes ... skipping 186 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (block volmode)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":3,"skipped":19,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:15.504: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 79 lines ... 
[32m• [SLOW TEST:28.579 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:924[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":4,"skipped":11,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:15.657: INFO: Only supported for providers [openstack] (not aws) ... skipping 93 lines ... Jun 22 08:43:12.617: INFO: PersistentVolumeClaim pvc-7hww2 found but phase is Pending instead of Bound. Jun 22 08:43:14.648: INFO: PersistentVolumeClaim pvc-7hww2 found and phase=Bound (12.225024124s) Jun 22 08:43:14.648: INFO: Waiting up to 3m0s for PersistentVolume local-hzpgl to have phase Bound Jun 22 08:43:14.679: INFO: PersistentVolume local-hzpgl found and phase=Bound (30.82359ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-8dpc [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:14.773: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8dpc" in namespace "provisioning-763" to be "Succeeded or Failed" Jun 22 08:43:14.803: INFO: Pod "pod-subpath-test-preprovisionedpv-8dpc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.734359ms Jun 22 08:43:16.835: INFO: Pod "pod-subpath-test-preprovisionedpv-8dpc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062463548s Jun 22 08:43:18.869: INFO: Pod "pod-subpath-test-preprovisionedpv-8dpc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09617616s Jun 22 08:43:20.901: INFO: Pod "pod-subpath-test-preprovisionedpv-8dpc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128439793s [1mSTEP[0m: Saw pod success Jun 22 08:43:20.901: INFO: Pod "pod-subpath-test-preprovisionedpv-8dpc" satisfied condition "Succeeded or Failed" Jun 22 08:43:20.934: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-subpath-test-preprovisionedpv-8dpc container test-container-subpath-preprovisionedpv-8dpc: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:21.066: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8dpc to disappear Jun 22 08:43:21.099: INFO: Pod pod-subpath-test-preprovisionedpv-8dpc no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-8dpc Jun 22 08:43:21.100: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8dpc" in namespace "provisioning-763" ... skipping 26 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":57,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:21.915: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 33 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:22.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-1545" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":5,"skipped":64,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:22.364: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 21 lines ... [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-a69e254c-4ca3-44c2-8039-c9112ba9095d [1mSTEP[0m: Creating a pod to test consume configMaps Jun 22 08:43:15.881: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559" in namespace "projected-8379" to be "Succeeded or Failed" Jun 22 08:43:15.915: INFO: Pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559": Phase="Pending", Reason="", readiness=false. Elapsed: 33.48013ms Jun 22 08:43:17.954: INFO: Pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0725053s Jun 22 08:43:19.991: INFO: Pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11002836s Jun 22 08:43:22.023: INFO: Pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.141480154s Jun 22 08:43:24.053: INFO: Pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171732835s [1mSTEP[0m: Saw pod success Jun 22 08:43:24.053: INFO: Pod "pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559" satisfied condition "Succeeded or Failed" Jun 22 08:43:24.083: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:24.151: INFO: Waiting for pod pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559 to disappear Jun 22 08:43:24.181: INFO: Pod pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 27 lines ... [32m• [SLOW TEST:25.624 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of same group and version but different kinds [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":5,"skipped":69,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:25.578: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [36mDriver hostPathSymlink doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":4,"skipped":49,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:17.606: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename services [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 29 lines ... 
[32m• [SLOW TEST:8.329 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should allow pods to hairpin back to themselves through services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1007[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":5,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:25.947: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 106 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":22,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:27.838: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern ... skipping 174 lines ... 
Jun 22 08:43:22.229: INFO: Received response from host: affinity-clusterip-transition-btmk5 Jun 22 08:43:22.229: INFO: Received response from host: affinity-clusterip-transition-2b5ln Jun 22 08:43:22.229: INFO: Received response from host: affinity-clusterip-transition-btmk5 Jun 22 08:43:22.229: INFO: Received response from host: affinity-clusterip-transition-2b5ln Jun 22 08:43:22.229: INFO: Received response from host: affinity-clusterip-transition-btmk5 Jun 22 08:43:22.229: INFO: [affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v 
affinity-clusterip-transition-2b5ln affinity-clusterip-transition-2b5ln affinity-clusterip-transition-nz72v affinity-clusterip-transition-btmk5 affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5 affinity-clusterip-transition-2b5ln affinity-clusterip-transition-btmk5] Jun 22 08:43:22.229: FAIL: Affinity should hold but didn't. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity({0x78e7e70, 0xc003cbc000}, 0x0, {0xc003d70a80, 0x0}, 0x0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:209 +0x1b7 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x6f3e9fc, {0x78e7e70, 0xc003cbc000}, 0xc003c66000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2975 +0x7cc ... skipping 36 lines ... Jun 22 08:43:24.833: INFO: At 2022-06-22 08:41:05 +0000 UTC - event for affinity-clusterip-transition-nz72v: {kubelet ip-172-20-0-138.ec2.internal} Started: Started container affinity-clusterip-transition Jun 22 08:43:24.833: INFO: At 2022-06-22 08:41:05 +0000 UTC - event for affinity-clusterip-transition-nz72v: {kubelet ip-172-20-0-138.ec2.internal} Created: Created container affinity-clusterip-transition Jun 22 08:43:24.833: INFO: At 2022-06-22 08:41:09 +0000 UTC - event for execpod-affinityk7h6r: {default-scheduler } Scheduled: Successfully assigned services-3311/execpod-affinityk7h6r to ip-172-20-0-138.ec2.internal Jun 22 08:43:24.833: INFO: At 2022-06-22 08:41:12 +0000 UTC - event for execpod-affinityk7h6r: {kubelet ip-172-20-0-138.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine Jun 22 08:43:24.833: INFO: At 2022-06-22 08:41:12 +0000 UTC - event for execpod-affinityk7h6r: {kubelet ip-172-20-0-138.ec2.internal} Started: Started container agnhost-container Jun 22 08:43:24.833: INFO: At 2022-06-22 08:41:12 +0000 UTC - event for execpod-affinityk7h6r: {kubelet ip-172-20-0-138.ec2.internal} Created: Created container agnhost-container Jun 22 08:43:24.833: INFO: At 2022-06-22 08:43:22 +0000 UTC - event for affinity-clusterip-transition: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-3311/affinity-clusterip-transition: Operation cannot be fulfilled on endpoints "affinity-clusterip-transition": the object has been modified; please apply your changes to the latest version and try again Jun 22 08:43:24.833: INFO: At 2022-06-22 08:43:22 +0000 UTC - event for affinity-clusterip-transition-2b5ln: {kubelet ip-172-20-0-238.ec2.internal} Killing: Stopping container affinity-clusterip-transition Jun 22 08:43:24.833: INFO: At 2022-06-22 08:43:22 +0000 UTC - event for affinity-clusterip-transition-btmk5: {kubelet ip-172-20-0-114.ec2.internal} Killing: Stopping container affinity-clusterip-transition Jun 22 08:43:24.833: INFO: At 2022-06-22 08:43:22 +0000 UTC - event for affinity-clusterip-transition-nz72v: {kubelet ip-172-20-0-138.ec2.internal} Killing: Stopping container affinity-clusterip-transition Jun 22 08:43:24.833: INFO: At 2022-06-22 08:43:22 +0000 UTC - event for execpod-affinityk7h6r: {kubelet ip-172-20-0-138.ec2.internal} Killing: Stopping container agnhost-container Jun 22 08:43:24.833: INFO: At 2022-06-22 08:43:24 +0000 UTC - event for affinity-clusterip-transition-nz72v: {kubelet ip-172-20-0-138.ec2.internal} FailedKillPod: 
error killing pod: failed to "KillContainer" for "affinity-clusterip-transition" with KillContainerError: "rpc error: code = NotFound desc = an error occurred when try to find container \"7494813c0e0a55fd07a342d02a50958040a2efc070da1f4b4eb2e29e52e2c65f\": not found" Jun 22 08:43:24.863: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 08:43:24.863: INFO: Jun 22 08:43:24.894: INFO: Logging node info for node ip-172-20-0-114.ec2.internal Jun 22 08:43:24.924: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-0-114.ec2.internal 844470ee-972e-4ab9-8790-9e1f5909cdeb 17561 0 2022-06-22 08:36:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-0-114.ec2.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:ip-172-20-0-114.ec2.internal topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0d031b23519ce8ef0"} io.cilium.network.ipv4-cilium-host:100.96.9.254 io.cilium.network.ipv4-health-ip:100.96.9.209 io.cilium.network.ipv4-pod-cidr:100.96.9.0/24 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{Go-http-client Update v1 2022-06-22 08:36:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2022-06-22 08:36:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-06-22 08:36:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.9.0/24\"":{}}}} } {cilium-agent Update v1 2022-06-22 08:36:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:io.cilium.network.ipv4-cilium-host":{},"f:io.cilium.network.ipv4-health-ip":{},"f:io.cilium.network.ipv4-pod-cidr":{}}}} } {cilium-agent Update v1 2022-06-22 08:36:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {Go-http-client Update v1 2022-06-22 08:42:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2022-06-22 08:43:16 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.9.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0d031b23519ce8ef0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.9.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133167038464 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4064743424 0} {<nil>} 3969476Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119850334420 0} {<nil>} 119850334420 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959885824 0} {<nil>} 3867076Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 08:36:30 +0000 UTC,LastTransitionTime:2022-06-22 08:36:30 +0000 UTC,Reason:CiliumIsUp,Message:Cilium is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 08:43:22 +0000 UTC,LastTransitionTime:2022-06-22 08:36:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 08:43:22 +0000 UTC,LastTransitionTime:2022-06-22 08:36:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 08:43:22 +0000 UTC,LastTransitionTime:2022-06-22 08:36:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 08:43:22 +0000 UTC,LastTransitionTime:2022-06-22 08:36:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.114,},NodeAddress{Type:ExternalIP,Address:54.226.232.142,},NodeAddress{Type:Hostname,Address:ip-172-20-0-114.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-0-114.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-226-232-142.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d3f972b373e1a9f12d2b8968549e4,SystemUUID:ec2d3f97-2b37-3e1a-9f12-d2b8968549e4,BootID:35726d94-ac3c-4b1c-9769-f4366dde6d56,KernelVersion:5.4.0-1029-aws,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.23.1,KubeProxyVersion:v1.23.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:0612218e28288db360c63677c09fafa2d17edda4f13867bcabf87056046b33bb quay.io/cilium/cilium:v1.10.5],SizeBytes:149643860,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:ddd1b2e650ce5a10b3f5e9ae706cc384fc7e1a15940e07bba75f27369bc6a1ac k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.5.0],SizeBytes:114728287,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:49628485,},ContainerImage{Names:[k8s.gcr.io/dns/k8s-dns-node-cache@sha256:94f4b59b3b85a38ada50c0772b67a23877a19b30b64e0313e6e81ebcf5cd7e91 k8s.gcr.io/dns/k8s-dns-node-cache:1.21.3],SizeBytes:42475608,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:e40f3a28721588affcf187f3f246d1e078157dabe274003eaa2957a83f7170c8 k8s.gcr.io/kube-proxy:v1.23.1],SizeBytes:39272869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6c5603956c0aed6b4087a8716afce8eb22f664b13162346ee852b4fab305ca15 k8s.gcr.io/metrics-server/metrics-server:v0.5.0],SizeBytes:25804692,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:43a9f52f5dce39bf1816afe6141724cc2d08811e466dd46e6628c925e2419bdc k8s.gcr.io/coredns/coredns:v1.8.5],SizeBytes:13581340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-06d21282b93fccb06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-06d21282b93fccb06,DevicePath:,},},Config:nil,},} Jun 22 08:43:24.924: INFO: ... skipping 233 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [91mJun 22 08:43:22.229: Affinity should hold but didn't.[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:209 [90m------------------------------[0m {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":6,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:28.066: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 31 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:28.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-1912" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":7,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode ... skipping 35 lines ... Jun 22 08:43:13.578: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to unmount after the subpath directory is deleted [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445 Jun 22 08:43:13.735: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 22 08:43:13.803: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-351" in namespace "provisioning-351" to be "Succeeded or Failed" Jun 22 08:43:13.833: INFO: Pod "hostpath-symlink-prep-provisioning-351": Phase="Pending", Reason="", readiness=false. Elapsed: 30.605649ms Jun 22 08:43:15.865: INFO: Pod "hostpath-symlink-prep-provisioning-351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062326428s Jun 22 08:43:17.897: INFO: Pod "hostpath-symlink-prep-provisioning-351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093758797s [1mSTEP[0m: Saw pod success Jun 22 08:43:17.897: INFO: Pod "hostpath-symlink-prep-provisioning-351" satisfied condition "Succeeded or Failed" Jun 22 08:43:17.897: INFO: Deleting pod "hostpath-symlink-prep-provisioning-351" in namespace "provisioning-351" Jun 22 08:43:17.933: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-351" to be fully deleted Jun 22 08:43:17.965: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-b428 Jun 22 08:43:22.075: INFO: Running '/logs/artifacts/403903f7-f202-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-351 exec pod-subpath-test-inlinevolume-b428 --container test-container-volume-inlinevolume-b428 -- /bin/sh -c rm -r /test-volume/provisioning-351' Jun 22 08:43:22.540: INFO: stderr: "" Jun 22 08:43:22.540: INFO: stdout: "" [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-b428 Jun 22 08:43:22.540: INFO: Deleting pod "pod-subpath-test-inlinevolume-b428" in namespace "provisioning-351" Jun 22 08:43:22.575: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-b428" to be fully deleted [1mSTEP[0m: Deleting pod Jun 22 08:43:24.638: INFO: Deleting pod "pod-subpath-test-inlinevolume-b428" in namespace "provisioning-351" Jun 22 08:43:24.707: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-351" in namespace "provisioning-351" to be "Succeeded or Failed" Jun 22 08:43:24.741: INFO: Pod "hostpath-symlink-prep-provisioning-351": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.886754ms Jun 22 08:43:26.774: INFO: Pod "hostpath-symlink-prep-provisioning-351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067591681s Jun 22 08:43:28.805: INFO: Pod "hostpath-symlink-prep-provisioning-351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098564717s [1mSTEP[0m: Saw pod success Jun 22 08:43:28.805: INFO: Pod "hostpath-symlink-prep-provisioning-351" satisfied condition "Succeeded or Failed" Jun 22 08:43:28.806: INFO: Deleting pod "hostpath-symlink-prep-provisioning-351" in namespace "provisioning-351" Jun 22 08:43:28.848: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-351" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:28.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-351" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":16,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:28.944: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 31 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:29.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-5772" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... 
[32m• [SLOW TEST:65.417 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be restarted with a failing exec liveness probe that took longer than the timeout [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:260[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":8,"skipped":48,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral ... skipping 138 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:214[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":1,"skipped":0,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:29.527: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 42 lines ... [32m• [SLOW TEST:28.485 seconds][0m [sig-node] PreStop [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m graceful pod terminated should wait until preStop hook completes the process [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":8,"skipped":75,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:33.152: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 61 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41[0m when running a container with a new image [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266[0m should be able to pull image [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":6,"skipped":77,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:33.170: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 75 lines ... [32m• [SLOW TEST:20.592 seconds][0m [sig-api-machinery] Watchers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should observe add, update, and delete watch notifications on configmaps [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral ... skipping 125 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":4,"skipped":13,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:36.196: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 99 lines ... 
[32m• [SLOW TEST:10.570 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m patching/updating a mutating webhook should work [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:36.524: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 11 lines ... [36mDriver local doesn't support InlineVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:24.246: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 64 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":18,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:36.663: INFO: Only supported for providers [gce gke] (not aws) ... skipping 55 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:36.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "sysctl-9684" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 6 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:36.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-2608" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":8,"skipped":60,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:37.048: INFO: Only supported for providers [gce gke] (not aws) ... skipping 51 lines ... [sig-storage] CSI Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: csi-hostpath] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver "csi-hostpath" does not support topology - skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92 [90m------------------------------[0m ... skipping 34 lines ... [32m• [SLOW TEST:29.717 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should delete a collection of pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":5,"skipped":11,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:39.592: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 168 lines ... 
Jun 22 08:42:58.936: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volume-9965p78px [1mSTEP[0m: creating a claim Jun 22 08:42:58.967: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-zsvd [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 22 08:42:59.063: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-zsvd" in namespace "volume-9965" to be "Succeeded or Failed" Jun 22 08:42:59.094: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.375489ms Jun 22 08:43:01.175: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111977568s Jun 22 08:43:03.250: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186217077s Jun 22 08:43:05.281: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217272458s Jun 22 08:43:07.314: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251020268s Jun 22 08:43:09.345: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.282038674s ... skipping 3 lines ... Jun 22 08:43:17.478: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.414259475s Jun 22 08:43:19.511: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.447336134s Jun 22 08:43:21.544: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.480981815s Jun 22 08:43:23.577: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.51332082s Jun 22 08:43:25.609: INFO: Pod "exec-volume-test-dynamicpv-zsvd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.545662522s [1mSTEP[0m: Saw pod success Jun 22 08:43:25.609: INFO: Pod "exec-volume-test-dynamicpv-zsvd" satisfied condition "Succeeded or Failed" Jun 22 08:43:25.639: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod exec-volume-test-dynamicpv-zsvd container exec-container-dynamicpv-zsvd: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:25.710: INFO: Waiting for pod exec-volume-test-dynamicpv-zsvd to disappear Jun 22 08:43:25.744: INFO: Pod exec-volume-test-dynamicpv-zsvd no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-zsvd Jun 22 08:43:25.744: INFO: Deleting pod "exec-volume-test-dynamicpv-zsvd" in namespace "volume-9965" ... skipping 18 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":10,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:41.119: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 154 lines ... Jun 22 08:43:26.737: INFO: PersistentVolumeClaim pvc-dfrgm found but phase is Pending instead of Bound. Jun 22 08:43:28.767: INFO: PersistentVolumeClaim pvc-dfrgm found and phase=Bound (4.094345575s) Jun 22 08:43:28.767: INFO: Waiting up to 3m0s for PersistentVolume local-r4426 to have phase Bound Jun 22 08:43:28.795: INFO: PersistentVolume local-r4426 found and phase=Bound (28.17061ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-2pkr [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:28.889: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2pkr" in namespace "provisioning-6356" to be "Succeeded or Failed" Jun 22 08:43:28.917: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Pending", Reason="", readiness=false. Elapsed: 28.092313ms Jun 22 08:43:30.952: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063181187s Jun 22 08:43:32.984: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094457828s Jun 22 08:43:35.018: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129055258s Jun 22 08:43:37.048: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158790272s Jun 22 08:43:39.078: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188770642s Jun 22 08:43:41.107: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.218266877s [1mSTEP[0m: Saw pod success Jun 22 08:43:41.107: INFO: Pod "pod-subpath-test-preprovisionedpv-2pkr" satisfied condition "Succeeded or Failed" Jun 22 08:43:41.140: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-preprovisionedpv-2pkr container test-container-subpath-preprovisionedpv-2pkr: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:41.219: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2pkr to disappear Jun 22 08:43:41.252: INFO: Pod pod-subpath-test-preprovisionedpv-2pkr no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-2pkr Jun 22 08:43:41.252: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2pkr" in namespace "provisioning-6356" ... skipping 21 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":71,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 5 lines ... [It] should support existing single file [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219 Jun 22 08:43:29.511: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 22 08:43:29.511: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-kn5d [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:29.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kn5d" in namespace "provisioning-1554" to be "Succeeded or Failed" Jun 22 08:43:29.579: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.395044ms Jun 22 08:43:31.611: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063351348s Jun 22 08:43:33.658: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109796097s Jun 22 08:43:35.690: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142216679s Jun 22 08:43:37.721: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17332997s Jun 22 08:43:39.752: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204630164s Jun 22 08:43:41.793: INFO: Pod "pod-subpath-test-inlinevolume-kn5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.244840266s [1mSTEP[0m: Saw pod success Jun 22 08:43:41.793: INFO: Pod "pod-subpath-test-inlinevolume-kn5d" satisfied condition "Succeeded or Failed" Jun 22 08:43:41.826: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-inlinevolume-kn5d container test-container-subpath-inlinevolume-kn5d: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:41.942: INFO: Waiting for pod pod-subpath-test-inlinevolume-kn5d to disappear Jun 22 08:43:41.974: INFO: Pod pod-subpath-test-inlinevolume-kn5d no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-kn5d Jun 22 08:43:41.974: INFO: Deleting pod "pod-subpath-test-inlinevolume-kn5d" in namespace "provisioning-1554" ... skipping 12 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":50,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:42.126: INFO: Only supported for providers [vsphere] (not aws) ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [36mOnly supported for providers [vsphere] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438 [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":63,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:57.666: INFO: >>> kubeConfig: /root/.kube/config ... skipping 21 lines ... Jun 22 08:43:12.512: INFO: PersistentVolumeClaim pvc-2qgbx found but phase is Pending instead of Bound. Jun 22 08:43:14.546: INFO: PersistentVolumeClaim pvc-2qgbx found and phase=Bound (14.25612415s) Jun 22 08:43:14.546: INFO: Waiting up to 3m0s for PersistentVolume local-79s6t to have phase Bound Jun 22 08:43:14.575: INFO: PersistentVolume local-79s6t found and phase=Bound (29.474669ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-26hd [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 22 08:43:14.668: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-26hd" in namespace "provisioning-7541" to be "Succeeded or Failed" Jun 22 08:43:14.697: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.189107ms Jun 22 08:43:16.728: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059564094s Jun 22 08:43:18.759: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090629548s Jun 22 08:43:20.790: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. Elapsed: 6.121530196s Jun 22 08:43:22.821: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.152538506s Jun 22 08:43:24.851: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. Elapsed: 10.182811835s ... skipping 3 lines ... Jun 22 08:43:32.989: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. Elapsed: 18.320530293s Jun 22 08:43:35.019: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. Elapsed: 20.350613116s Jun 22 08:43:37.048: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. Elapsed: 22.380212128s Jun 22 08:43:39.079: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Running", Reason="", readiness=true. Elapsed: 24.410771302s Jun 22 08:43:41.110: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.441503638s [1mSTEP[0m: Saw pod success Jun 22 08:43:41.110: INFO: Pod "pod-subpath-test-preprovisionedpv-26hd" satisfied condition "Succeeded or Failed" Jun 22 08:43:41.141: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-subpath-test-preprovisionedpv-26hd container test-container-subpath-preprovisionedpv-26hd: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:41.229: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-26hd to disappear Jun 22 08:43:41.259: INFO: Pod pod-subpath-test-preprovisionedpv-26hd no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-26hd Jun 22 08:43:41.259: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-26hd" in namespace "provisioning-7541" ... skipping 28 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":63,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 18 lines ... Jun 22 08:43:27.103: INFO: PersistentVolumeClaim pvc-lb7mf found but phase is Pending instead of Bound. Jun 22 08:43:29.136: INFO: PersistentVolumeClaim pvc-lb7mf found and phase=Bound (4.104700847s) Jun 22 08:43:29.136: INFO: Waiting up to 3m0s for PersistentVolume local-6wg7w to have phase Bound Jun 22 08:43:29.168: INFO: PersistentVolume local-6wg7w found and phase=Bound (31.407888ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-fwg6 [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:29.269: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fwg6" in namespace "provisioning-4326" to be "Succeeded or Failed" Jun 22 08:43:29.309: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.649692ms Jun 22 08:43:31.344: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.074761752s Jun 22 08:43:33.419: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149752909s Jun 22 08:43:35.453: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183636427s Jun 22 08:43:37.487: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218056704s Jun 22 08:43:39.521: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.2520952s Jun 22 08:43:41.565: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.295769183s [1mSTEP[0m: Saw pod success Jun 22 08:43:41.565: INFO: Pod "pod-subpath-test-preprovisionedpv-fwg6" satisfied condition "Succeeded or Failed" Jun 22 08:43:41.609: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-preprovisionedpv-fwg6 container test-container-subpath-preprovisionedpv-fwg6: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:41.713: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fwg6 to disappear Jun 22 08:43:41.749: INFO: Pod pod-subpath-test-preprovisionedpv-fwg6 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-fwg6 Jun 22 08:43:41.749: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fwg6" in namespace "provisioning-4326" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":73,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:42.314: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 132 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:42.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-7700" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":3,"skipped":31,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:42.683: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 25 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 22 08:43:33.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b" in namespace "projected-8155" to be "Succeeded or Failed" Jun 22 08:43:33.607: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.442725ms Jun 22 08:43:35.643: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125538507s Jun 22 08:43:37.674: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156406892s Jun 22 08:43:39.706: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188222239s Jun 22 08:43:41.744: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226398782s Jun 22 08:43:43.776: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258560168s [1mSTEP[0m: Saw pod success Jun 22 08:43:43.776: INFO: Pod "downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b" satisfied condition "Succeeded or Failed" Jun 22 08:43:43.817: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b container client-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:43.909: INFO: Waiting for pod downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b to disappear Jun 22 08:43:43.940: INFO: Pod downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 12 lines ... [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:37.074: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename init-container [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: creating the pod Jun 22 08:43:37.220: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:43:48.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "init-container-7852" for this suite. 
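The projected downward API case above ("should provide container's cpu limit") mounts the container's own CPU limit as a file and reads it back. A sketch of the downward API items such a volume carries, assuming the corev1 Go types; in the projected variant the same items sit inside a projected volume source. The container name matches the "client-container" seen in the log; everything else is illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// cpuLimitDownwardAPIVolume exposes the named container's CPU limit as the
// file "cpu_limit" inside the volume, which the test pod then cats and checks.
func cpuLimitDownwardAPIVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // container name seen in the log
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
}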
[32m• [SLOW TEST:11.149 seconds][0m [sig-node] InitContainer [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:48.225: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 136 lines ... [It] should support existing directory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205 Jun 22 08:43:42.001: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 22 08:43:42.033: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-nptq [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:42.079: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-nptq" in namespace "provisioning-1594" to be "Succeeded or Failed" Jun 22 08:43:42.116: INFO: Pod "pod-subpath-test-inlinevolume-nptq": Phase="Pending", Reason="", readiness=false. Elapsed: 37.097579ms Jun 22 08:43:44.148: INFO: Pod "pod-subpath-test-inlinevolume-nptq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068854687s Jun 22 08:43:46.177: INFO: Pod "pod-subpath-test-inlinevolume-nptq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097673049s Jun 22 08:43:48.215: INFO: Pod "pod-subpath-test-inlinevolume-nptq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135499642s Jun 22 08:43:50.244: INFO: Pod "pod-subpath-test-inlinevolume-nptq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.164368457s [1mSTEP[0m: Saw pod success Jun 22 08:43:50.244: INFO: Pod "pod-subpath-test-inlinevolume-nptq" satisfied condition "Succeeded or Failed" Jun 22 08:43:50.272: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-inlinevolume-nptq container test-container-volume-inlinevolume-nptq: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:50.345: INFO: Waiting for pod pod-subpath-test-inlinevolume-nptq to disappear Jun 22 08:43:50.374: INFO: Pod pod-subpath-test-inlinevolume-nptq no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-nptq Jun 22 08:43:50.374: INFO: Deleting pod "pod-subpath-test-inlinevolume-nptq" in namespace "provisioning-1594" ... skipping 12 lines ... 
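The init-container conformance case summarized above asserts that a failing init container on a restartPolicy=Never pod leaves the app containers unstarted and marks the pod Failed. A sketch of a pod spec that exercises that path, assuming the corev1 Go types; the image and names are assumptions, not taken from this run:

package sketch

import corev1 "k8s.io/api/core/v1"

// failingInitPodSpec never reaches its app container: the init container exits
// non-zero and, with RestartPolicyNever, the pod goes to Failed instead of retrying.
func failingInitPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		InitContainers: []corev1.Container{{
			Name:    "init-fail",
			Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
			Command: []string{"sh", "-c", "exit 1"},
		}},
		Containers: []corev1.Container{{
			Name:    "app",
			Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
			Command: []string{"sh", "-c", "echo should never run"},
		}},
	}
}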
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":74,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:50.495: INFO: Only supported for providers [openstack] (not aws) ... skipping 49 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: block] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 28 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-projected-all-test-volume-19bf6479-bc87-4a77-8ffb-9bcc5c30b9c6 [1mSTEP[0m: Creating secret with name secret-projected-all-test-volume-3cec1e47-1bc7-4870-a75a-1363408fb44a [1mSTEP[0m: Creating a pod to test Check all projections for projected volume plugin Jun 22 08:43:42.589: INFO: Waiting up to 5m0s for pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610" in namespace "projected-4731" to be "Succeeded or Failed" Jun 22 08:43:42.621: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 31.450992ms Jun 22 08:43:44.656: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066979191s Jun 22 08:43:46.695: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105664658s Jun 22 08:43:48.730: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.140477184s Jun 22 08:43:50.776: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187020436s Jun 22 08:43:52.815: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 10.225139262s Jun 22 08:43:54.847: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.257848163s [1mSTEP[0m: Saw pod success Jun 22 08:43:54.847: INFO: Pod "projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610" satisfied condition "Succeeded or Failed" Jun 22 08:43:54.879: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610 container projected-all-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:54.953: INFO: Waiting for pod projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610 to disappear Jun 22 08:43:54.985: INFO: Pod projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:12.732 seconds][0m [sig-storage] Projected combined [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should project all components that make up the projection API [Projection][NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":82,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:55.057: INFO: Only supported for providers [gce gke] (not aws) ... skipping 89 lines ... Jun 22 08:43:42.121: INFO: PersistentVolumeClaim pvc-4bplr found but phase is Pending instead of Bound. Jun 22 08:43:44.152: INFO: PersistentVolumeClaim pvc-4bplr found and phase=Bound (12.252039192s) Jun 22 08:43:44.152: INFO: Waiting up to 3m0s for PersistentVolume local-sjkz7 to have phase Bound Jun 22 08:43:44.188: INFO: PersistentVolume local-sjkz7 found and phase=Bound (35.431647ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-vscj [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:44.281: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vscj" in namespace "provisioning-466" to be "Succeeded or Failed" Jun 22 08:43:44.312: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.936453ms Jun 22 08:43:46.343: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062389145s Jun 22 08:43:48.376: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094958185s Jun 22 08:43:50.407: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.126073473s Jun 22 08:43:52.439: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157694917s Jun 22 08:43:54.470: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188904464s [1mSTEP[0m: Saw pod success Jun 22 08:43:54.470: INFO: Pod "pod-subpath-test-preprovisionedpv-vscj" satisfied condition "Succeeded or Failed" Jun 22 08:43:54.501: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-subpath-test-preprovisionedpv-vscj container test-container-subpath-preprovisionedpv-vscj: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:54.621: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vscj to disappear Jun 22 08:43:54.662: INFO: Pod pod-subpath-test-preprovisionedpv-vscj no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-vscj Jun 22 08:43:54.662: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vscj" in namespace "provisioning-466" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":25,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":8,"skipped":90,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:06.951: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename job [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 15 lines ... [32m• [SLOW TEST:48.467 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should remove pods when job is deleted [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:185[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":9,"skipped":90,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:55.421: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping ... skipping 93 lines ... Jun 22 08:43:41.279: INFO: PersistentVolumeClaim pvc-ggfmd found but phase is Pending instead of Bound. 
Jun 22 08:43:43.331: INFO: PersistentVolumeClaim pvc-ggfmd found and phase=Bound (4.123047869s) Jun 22 08:43:43.331: INFO: Waiting up to 3m0s for PersistentVolume local-qp6kb to have phase Bound Jun 22 08:43:43.403: INFO: PersistentVolume local-qp6kb found and phase=Bound (72.481715ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-xlgd [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 22 08:43:43.548: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-xlgd" in namespace "volume-3756" to be "Succeeded or Failed" Jun 22 08:43:43.581: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.890196ms Jun 22 08:43:45.610: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062437094s Jun 22 08:43:47.643: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095233006s Jun 22 08:43:49.673: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125107232s Jun 22 08:43:51.703: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15491281s Jun 22 08:43:53.733: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.185187649s Jun 22 08:43:55.764: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.216731398s Jun 22 08:43:57.795: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.247195668s [1mSTEP[0m: Saw pod success Jun 22 08:43:57.795: INFO: Pod "exec-volume-test-preprovisionedpv-xlgd" satisfied condition "Succeeded or Failed" Jun 22 08:43:57.829: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod exec-volume-test-preprovisionedpv-xlgd container exec-container-preprovisionedpv-xlgd: <nil> [1mSTEP[0m: delete the pod Jun 22 08:43:57.960: INFO: Waiting for pod exec-volume-test-preprovisionedpv-xlgd to disappear Jun 22 08:43:58.016: INFO: Pod exec-volume-test-preprovisionedpv-xlgd no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-xlgd Jun 22 08:43:58.016: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xlgd" in namespace "volume-3756" ... skipping 19 lines ... 
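The "found but phase is Pending instead of Bound" lines in these chunks are the same claim being re-checked until its PersistentVolume binds. A sketch of that wait, assuming client-go; the helper name and the two-second interval are illustrative:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls every 2s until the claim reports phase Bound,
// mirroring the "PersistentVolumeClaim ... found but phase is Pending" lines.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}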
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":22,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:43:58.716: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 187 lines ... [32m• [SLOW TEST:20.313 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to change the type from ExternalName to ClusterIP [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... [32m• [SLOW TEST:12.495 seconds][0m [sig-apps] ReplicaSet [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should list and delete a collection of ReplicaSets [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":11,"skipped":91,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... 
[32m• [SLOW TEST:31.265 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m updates the published spec when one version gets renamed [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":9,"skipped":81,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] SCTP [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... Jun 22 08:43:38.413: INFO: ExecWithOptions: Clientset creation Jun 22 08:43:38.413: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/sctp-4915/pods/hostexec-ip-172-20-0-114.ec2.internal-gf4m8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jun 22 08:43:38.678: INFO: exec ip-172-20-0-114.ec2.internal: command: lsmod | grep sctp Jun 22 08:43:38.678: INFO: exec ip-172-20-0-114.ec2.internal: stdout: "" Jun 22 08:43:38.678: INFO: exec ip-172-20-0-114.ec2.internal: stderr: "" Jun 22 08:43:38.678: INFO: exec ip-172-20-0-114.ec2.internal: exit code: 0 Jun 22 08:43:38.678: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 22 08:43:38.678: INFO: the sctp module is not loaded on node: ip-172-20-0-114.ec2.internal Jun 22 08:43:38.678: INFO: Executing cmd "lsmod | grep sctp" on node ip-172-20-0-238.ec2.internal Jun 22 08:43:40.780: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:sctp-4915 PodName:hostexec-ip-172-20-0-238.ec2.internal-jjskv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 22 08:43:40.780: INFO: >>> kubeConfig: /root/.kube/config Jun 22 08:43:40.781: INFO: ExecWithOptions: Clientset creation Jun 22 08:43:40.781: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/sctp-4915/pods/hostexec-ip-172-20-0-238.ec2.internal-jjskv/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jun 22 08:43:41.024: INFO: exec ip-172-20-0-238.ec2.internal: command: lsmod | grep sctp Jun 22 08:43:41.024: INFO: exec ip-172-20-0-238.ec2.internal: stdout: "" Jun 22 08:43:41.024: INFO: exec ip-172-20-0-238.ec2.internal: stderr: "" Jun 22 08:43:41.025: INFO: exec ip-172-20-0-238.ec2.internal: exit code: 0 Jun 22 08:43:41.025: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 22 08:43:41.025: INFO: the sctp module is not loaded on node: 
ip-172-20-0-238.ec2.internal [1mSTEP[0m: Deleting pod hostexec-ip-172-20-0-238.ec2.internal-jjskv in namespace sctp-4915 [1mSTEP[0m: Deleting pod hostexec-ip-172-20-0-114.ec2.internal-gf4m8 in namespace sctp-4915 [1mSTEP[0m: creating service sctp-endpoint-test in namespace sctp-4915 Jun 22 08:43:41.175: INFO: Service sctp-endpoint-test in namespace sctp-4915 found. [1mSTEP[0m: validating endpoints do not exist yet ... skipping 39 lines ... [32m• [SLOW TEST:28.641 seconds][0m [sig-network] SCTP [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should allow creating a basic SCTP service with pod and endpoints [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3258[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints","total":-1,"completed":5,"skipped":33,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 39 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when create a pod with lifecycle hook [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44[0m should execute poststart exec hook properly [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:05.366: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 178 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should not deadlock when a pod's predecessor fails [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:254[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":2,"skipped":8,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... 
skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:70 [1mSTEP[0m: Creating a pod to test emptydir volume type on node default medium Jun 22 08:43:55.631: INFO: Waiting up to 5m0s for pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32" in namespace "emptydir-5669" to be "Succeeded or Failed" Jun 22 08:43:55.662: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 31.220472ms Jun 22 08:43:57.699: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067854399s Jun 22 08:43:59.730: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099442289s Jun 22 08:44:01.762: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131459802s Jun 22 08:44:03.801: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170500538s Jun 22 08:44:05.835: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204080408s Jun 22 08:44:07.875: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Pending", Reason="", readiness=false. Elapsed: 12.244587035s Jun 22 08:44:09.955: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.324006109s [1mSTEP[0m: Saw pod success Jun 22 08:44:09.955: INFO: Pod "pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32" satisfied condition "Succeeded or Failed" Jun 22 08:44:10.003: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:10.211: INFO: Waiting for pod pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32 to disappear Jun 22 08:44:10.252: INFO: Pod pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... 
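The EmptyDir FSGroup case above sets a pod-level fsGroup and then checks the mode and group ownership of an emptyDir volume on the default medium. A sketch of the spec fragment involved, assuming the corev1 Go types; the GID, image, and names are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// emptyDirWithFSGroup gives the emptyDir volume group ownership 1234 so files
// created on the default (node-disk) medium carry that GID and the expected mode.
func emptyDirWithFSGroup() corev1.PodSpec {
	fsGroup := int64(1234) // illustrative GID
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
		Volumes: []corev1.Volume{{
			Name:         "test-volume",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container",
			Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
			Command: []string{"sh", "-c", "ls -ld /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
}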
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47[0m volume on default medium should have the correct mode using FSGroup [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:70[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":10,"skipped":96,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 76 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and read from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":95,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:12.562: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 156 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI Volume expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:641[0m should expand volume by restarting pod if attach=on, nodeExpansion=on [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:670[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":5,"skipped":40,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 22 lines ... Jun 22 08:43:42.540: INFO: PersistentVolumeClaim pvc-vgfpj found but phase is Pending instead of Bound. 
Jun 22 08:43:44.572: INFO: PersistentVolumeClaim pvc-vgfpj found and phase=Bound (2.071057419s) Jun 22 08:43:44.572: INFO: Waiting up to 3m0s for PersistentVolume local-wzngt to have phase Bound Jun 22 08:43:44.603: INFO: PersistentVolume local-wzngt found and phase=Bound (31.101532ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-w4tn [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 22 08:43:44.700: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w4tn" in namespace "provisioning-9051" to be "Succeeded or Failed" Jun 22 08:43:44.731: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Pending", Reason="", readiness=false. Elapsed: 31.520095ms Jun 22 08:43:46.763: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0628573s Jun 22 08:43:48.795: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095131579s Jun 22 08:43:50.826: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126511945s Jun 22 08:43:52.857: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157127048s Jun 22 08:43:54.889: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189349593s ... skipping 3 lines ... Jun 22 08:44:03.022: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Running", Reason="", readiness=true. Elapsed: 18.322249018s Jun 22 08:44:05.055: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Running", Reason="", readiness=true. Elapsed: 20.35473419s Jun 22 08:44:07.086: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Running", Reason="", readiness=true. Elapsed: 22.386505513s Jun 22 08:44:09.117: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Running", Reason="", readiness=true. Elapsed: 24.417460584s Jun 22 08:44:11.186: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.485810245s [1mSTEP[0m: Saw pod success Jun 22 08:44:11.186: INFO: Pod "pod-subpath-test-preprovisionedpv-w4tn" satisfied condition "Succeeded or Failed" Jun 22 08:44:11.279: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-preprovisionedpv-w4tn container test-container-subpath-preprovisionedpv-w4tn: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:11.587: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w4tn to disappear Jun 22 08:44:11.670: INFO: Pod pod-subpath-test-preprovisionedpv-w4tn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-w4tn Jun 22 08:44:11.670: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w4tn" in namespace "provisioning-9051" ... skipping 30 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:13.095: INFO: Only supported for providers [azure] (not aws) ... skipping 125 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy","total":-1,"completed":10,"skipped":91,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:13.103: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 211 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSIStorageCapacity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1336[0m CSIStorageCapacity disabled [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1379[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":9,"skipped":67,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:13.572: INFO: Only supported for providers [openstack] (not aws) ... skipping 82 lines ... Jun 22 08:43:57.179: INFO: PersistentVolumeClaim pvc-2ln79 found but phase is Pending instead of Bound. 
Jun 22 08:43:59.211: INFO: PersistentVolumeClaim pvc-2ln79 found and phase=Bound (2.061496321s) Jun 22 08:43:59.211: INFO: Waiting up to 3m0s for PersistentVolume local-dj67v to have phase Bound Jun 22 08:43:59.245: INFO: PersistentVolume local-dj67v found and phase=Bound (34.150193ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-lnl8 [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:59.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lnl8" in namespace "provisioning-7901" to be "Succeeded or Failed" Jun 22 08:43:59.392: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.812451ms Jun 22 08:44:01.423: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066641298s Jun 22 08:44:03.453: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096645845s Jun 22 08:44:05.483: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126073165s Jun 22 08:44:07.517: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16007022s Jun 22 08:44:09.708: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352022498s Jun 22 08:44:11.760: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.4032974s [1mSTEP[0m: Saw pod success Jun 22 08:44:11.760: INFO: Pod "pod-subpath-test-preprovisionedpv-lnl8" satisfied condition "Succeeded or Failed" Jun 22 08:44:11.824: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-preprovisionedpv-lnl8 container test-container-subpath-preprovisionedpv-lnl8: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:11.952: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lnl8 to disappear Jun 22 08:44:12.022: INFO: Pod pod-subpath-test-preprovisionedpv-lnl8 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-lnl8 Jun 22 08:44:12.022: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lnl8" in namespace "provisioning-7901" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":106,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 20 lines ... Jun 22 08:43:56.906: INFO: PersistentVolumeClaim pvc-x65r8 found but phase is Pending instead of Bound. 
Jun 22 08:43:58.936: INFO: PersistentVolumeClaim pvc-x65r8 found and phase=Bound (6.122769829s) Jun 22 08:43:58.936: INFO: Waiting up to 3m0s for PersistentVolume local-6qssm to have phase Bound Jun 22 08:43:58.967: INFO: PersistentVolume local-6qssm found and phase=Bound (31.758875ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-lvv4 [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:43:59.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lvv4" in namespace "provisioning-3164" to be "Succeeded or Failed" Jun 22 08:43:59.108: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.345814ms Jun 22 08:44:01.139: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06198638s Jun 22 08:44:03.168: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091730036s Jun 22 08:44:05.203: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126094493s Jun 22 08:44:07.241: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164264797s Jun 22 08:44:09.271: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194437013s Jun 22 08:44:11.336: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.259288238s [1mSTEP[0m: Saw pod success Jun 22 08:44:11.336: INFO: Pod "pod-subpath-test-preprovisionedpv-lvv4" satisfied condition "Succeeded or Failed" Jun 22 08:44:11.406: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-preprovisionedpv-lvv4 container test-container-volume-preprovisionedpv-lvv4: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:11.629: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lvv4 to disappear Jun 22 08:44:11.711: INFO: Pod pod-subpath-test-preprovisionedpv-lvv4 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-lvv4 Jun 22 08:44:11.711: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lvv4" in namespace "provisioning-3164" ... skipping 26 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":69,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:13.824: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 52 lines ... [1mSTEP[0m: Destroying namespace "apply-7534" for this suite. 
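A note on the recurring "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" pattern in the subPath tests above: the framework repeatedly reads the pod's status.phase until it reaches a terminal phase or the timeout expires. A minimal client-go sketch of the same idea, not the framework's actual helper; the kubeconfig path, namespace, and pod name are placeholders taken from this run and would differ elsewhere:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption for illustration).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns, name := "provisioning-7901", "pod-subpath-test-preprovisionedpv-lnl8" // placeholders

	// Poll every 2s, give up after 5m, mirroring the "Waiting up to 5m0s" messages in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition "Succeeded or Failed" satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}

The same poll-until-terminal shape also explains the earlier "PersistentVolumeClaim ... found but phase is Pending instead of Bound" lines, just with the PVC phase instead of the pod phase.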
[AfterEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":11,"skipped":94,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 21 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:44:14.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-8207" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:14.902: INFO: Only supported for providers [gce gke] (not aws) ... skipping 164 lines ... [32m• [SLOW TEST:20.879 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should mutate custom resource with pruning [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:16.031: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 14 lines ... [36mDriver local doesn't support InlineVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":4,"skipped":30,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:42:51.635: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 23 lines ... 
[32m• [SLOW TEST:84.599 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:16.236: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 449 lines ... [32m• [SLOW TEST:17.518 seconds][0m [sig-network] Service endpoints latency [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should not be very high [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":3,"skipped":48,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:16.278: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 167 lines ... 
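For reference, the "Projected configMap optional updates should be reflected in volume" case above mounts a ConfigMap through a projected volume whose source is marked optional, so the pod can start before the ConfigMap exists and later picks up its contents. A minimal sketch of such a pod, not the test's actual spec; the pod name, ConfigMap name, and image are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalProjectedConfigMapPod builds a pod that mounts an optional ConfigMap
// through a projected volume; the kubelet refreshes the files when the ConfigMap changes.
func optionalProjectedConfigMapPod() *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "viewer",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // assumed available image
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
								Optional:             &optional, // pod starts even if the ConfigMap does not exist yet
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = optionalProjectedConfigMapPod() }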
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":57,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:12.576: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Jun 22 08:44:12.765: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31" in namespace "security-context-test-6951" to be "Succeeded or Failed" Jun 22 08:44:12.798: INFO: Pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31": Phase="Pending", Reason="", readiness=false. Elapsed: 32.745077ms Jun 22 08:44:14.856: INFO: Pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090414091s Jun 22 08:44:16.884: INFO: Pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11913853s Jun 22 08:44:18.916: INFO: Pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150507476s Jun 22 08:44:20.945: INFO: Pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180185057s Jun 22 08:44:20.945: INFO: Pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31" satisfied condition "Succeeded or Failed" Jun 22 08:44:20.975: INFO: Got logs for pod "busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:44:20.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-6951" for this suite. ... skipping 3 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a pod with privileged [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232[0m should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":13,"skipped":105,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:21.039: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping ... skipping 182 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl expose [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246[0m should create services for rc [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:21.137: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 88 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should support retrieving logs from the container over websockets [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":98,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:21.149: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 11 lines ... 
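The "should run the container as privileged when true" case above runs a busybox pod whose container sets securityContext.privileged=true and expects it to complete. A minimal sketch of that kind of pod spec, not the test's exact one; the pod name, image, and command are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privilegedPod returns a pod whose single container runs privileged,
// i.e. with access to the host's devices and full kernel capabilities.
func privilegedPod() *corev1.Pod {
	privileged := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // assumed available image
				Command: []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{
					Privileged: &privileged, // only a privileged container may do this on the node
				},
			}},
		},
	}
}

func main() { _ = privilegedPod() }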
[36mDriver local doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":8,"skipped":32,"failed":0} [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:16.766: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Jun 22 08:44:16.960: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3803" to be "Succeeded or Failed" Jun 22 08:44:16.993: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 33.425457ms Jun 22 08:44:19.027: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066649721s Jun 22 08:44:21.058: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098265011s Jun 22 08:44:21.058: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:44:21.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-3803" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":9,"skipped":32,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:21.180: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 143 lines ... 
[32m• [SLOW TEST:27.045 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should run the lifecycle of a Deployment [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:17.615: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 22 08:44:17.815: INFO: Waiting up to 5m0s for pod "busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10" in namespace "security-context-test-2496" to be "Succeeded or Failed" Jun 22 08:44:17.845: INFO: Pod "busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10": Phase="Pending", Reason="", readiness=false. Elapsed: 30.139716ms Jun 22 08:44:19.886: INFO: Pod "busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071473956s Jun 22 08:44:21.927: INFO: Pod "busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112091129s Jun 22 08:44:23.963: INFO: Pod "busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148773949s Jun 22 08:44:23.963: INFO: Pod "busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:44:23.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-2496" for this suite. ... skipping 2 lines ... 
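The two runAsNonRoot cases above ("explicit non-root user ID" and "uid 65534") both pin the container to a non-root UID through the security context, which the kubelet checks before starting the container. A minimal sketch under the same idea; the pod name, image, and command are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootPod pins the container to UID 65534 and asks the kubelet to verify
// that the container does not run as root (runAsNonRoot=true).
func nonRootPod() *corev1.Pod {
	uid := int64(65534)
	nonRoot := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // assumed available image
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:    &uid,
					RunAsNonRoot: &nonRoot,
				},
			}},
		},
	}
}

func main() { _ = nonRootPod() }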
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a container with runAsUser [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50[0m should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":62,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:24.059: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 64 lines ... [32m• [SLOW TEST:13.059 seconds][0m [sig-apps] ReplicationController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should serve a basic image on each replica with a public image [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:26.889: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 66 lines ... Jun 22 08:44:11.597: INFO: PersistentVolumeClaim pvc-znt99 found but phase is Pending instead of Bound. Jun 22 08:44:13.655: INFO: PersistentVolumeClaim pvc-znt99 found and phase=Bound (6.247379949s) Jun 22 08:44:13.655: INFO: Waiting up to 3m0s for PersistentVolume local-tv668 to have phase Bound Jun 22 08:44:13.710: INFO: PersistentVolume local-tv668 found and phase=Bound (55.865705ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-hrtx [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:44:14.068: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hrtx" in namespace "provisioning-5060" to be "Succeeded or Failed" Jun 22 08:44:14.229: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Pending", Reason="", readiness=false. Elapsed: 160.78423ms Jun 22 08:44:16.283: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214355603s Jun 22 08:44:18.323: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254057016s Jun 22 08:44:20.353: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284843105s Jun 22 08:44:22.425: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.356933656s Jun 22 08:44:24.466: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.39770214s Jun 22 08:44:26.511: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.44232105s [1mSTEP[0m: Saw pod success Jun 22 08:44:26.511: INFO: Pod "pod-subpath-test-preprovisionedpv-hrtx" satisfied condition "Succeeded or Failed" Jun 22 08:44:26.543: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-subpath-test-preprovisionedpv-hrtx container test-container-volume-preprovisionedpv-hrtx: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:26.647: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hrtx to disappear Jun 22 08:44:26.678: INFO: Pod pod-subpath-test-preprovisionedpv-hrtx no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-hrtx Jun 22 08:44:26.679: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hrtx" in namespace "provisioning-5060" ... skipping 65 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41[0m on terminated container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134[0m should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":51,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:29.212: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 81 lines ... [36mOnly supported for node OS distro [gci ubuntu custom] (not debian)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":102,"failed":0} [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:44.006: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename csi-mock-volumes [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 101 lines ... 
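The CSI mock-volume block that follows exercises online volume expansion (attach=on, nodeExpansion=on): a bound PVC is grown in place and the volume is expected to be resized without restarting the pod. Triggering that flow amounts to raising spec.resources.requests.storage on the PVC, provided the StorageClass allows expansion. A minimal client-go sketch, not the test's code; the kubeconfig path, namespace, PVC name, and new size are placeholder assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption for illustration).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns, name := "csi-mock-volumes-demo", "pvc-demo" // placeholders

	pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Raise the requested size; the CSI controller and node plugins expand the volume
	// online as long as the StorageClass has allowVolumeExpansion: true.
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("2Gi")
	if _, err := client.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}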
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI online volume expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:752[0m should expand volume without restarting pod if attach=on, nodeExpansion=on [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:767[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":8,"skipped":102,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:31.826: INFO: Only supported for providers [openstack] (not aws) ... skipping 197 lines ... [32m• [SLOW TEST:16.710 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 4 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 [1mSTEP[0m: Setting up data [It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating pod pod-subpath-test-secret-v5zv [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 22 08:44:05.650: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-v5zv" in namespace "subpath-9680" to be "Succeeded or Failed" Jun 22 08:44:05.682: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Pending", Reason="", readiness=false. Elapsed: 31.290855ms Jun 22 08:44:07.735: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08449361s Jun 22 08:44:09.827: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176571211s Jun 22 08:44:11.871: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. Elapsed: 6.220671526s Jun 22 08:44:13.993: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. Elapsed: 8.342398588s Jun 22 08:44:16.052: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. Elapsed: 10.401838028s ... skipping 4 lines ... Jun 22 08:44:26.230: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.579584313s Jun 22 08:44:28.264: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. Elapsed: 22.613734537s Jun 22 08:44:30.319: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. Elapsed: 24.668990395s Jun 22 08:44:32.358: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Running", Reason="", readiness=true. Elapsed: 26.70739916s Jun 22 08:44:34.391: INFO: Pod "pod-subpath-test-secret-v5zv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.740621923s [1mSTEP[0m: Saw pod success Jun 22 08:44:34.391: INFO: Pod "pod-subpath-test-secret-v5zv" satisfied condition "Succeeded or Failed" Jun 22 08:44:34.426: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-secret-v5zv container test-container-subpath-secret-v5zv: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:34.502: INFO: Waiting for pod pod-subpath-test-secret-v5zv to disappear Jun 22 08:44:34.532: INFO: Pod pod-subpath-test-secret-v5zv no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-secret-v5zv Jun 22 08:44:34.532: INFO: Deleting pod "pod-subpath-test-secret-v5zv" in namespace "subpath-9680" ... skipping 8 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34[0m should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":5,"skipped":59,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:34.637: INFO: Only supported for providers [openstack] (not aws) ... skipping 114 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should return command exit codes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499[0m running a failing command [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:519[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":6,"skipped":41,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... 
[32m• [SLOW TEST:15.344 seconds][0m [sig-node] InitContainer [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should invoke init containers on a RestartNever pod [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":38,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:36.547: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 48 lines ... [32m• [SLOW TEST:12.757 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m pod should support memory backed volumes of specified size [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:297[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":8,"skipped":70,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 6 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:44:36.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-2155" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":11,"skipped":47,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:37.017: INFO: >>> kubeConfig: /root/.kube/config ... skipping 104 lines ... 
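The "pod should support memory backed volumes of specified size" case above mounts an emptyDir with medium Memory and a sizeLimit, i.e. a tmpfs capped at the requested size. A minimal sketch of that volume shape, not the test's exact spec; the pod name, image, size, and mount path are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryBackedEmptyDirPod mounts a tmpfs-backed emptyDir capped at 64Mi.
func memoryBackedEmptyDirPod() *corev1.Pod {
	sizeLimit := resource.MustParse("64Mi")
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-memory-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // assumed available image
				Command: []string{"sh", "-c", "df -h /mnt/memvol && mount | grep /mnt/memvol"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "memvol",
					MountPath: "/mnt/memvol",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "memvol",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium:    corev1.StorageMediumMemory, // back the volume with tmpfs
						SizeLimit: &sizeLimit,                 // cap the tmpfs at 64Mi
					},
				},
			}},
		},
	}
}

func main() { _ = memoryBackedEmptyDirPod() }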
Jun 22 08:43:48.004: INFO: PersistentVolumeClaim pvc-2r7zc found and phase=Bound (2.100051494s) [1mSTEP[0m: Deleting the previously created pod Jun 22 08:44:09.216: INFO: Deleting pod "pvc-volume-tester-xqs6q" in namespace "csi-mock-volumes-279" Jun 22 08:44:09.256: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xqs6q" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Jun 22 08:44:13.390: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6InF4cmtYRXM1UHNIdXZ4SWtrc2Vua2lQeHFrQnRTTjNYc3JoRFE2Qkxtc0kifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2NTU4ODgwMzYsImlhdCI6MTY1NTg4NzQzNiwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLTE0Mzc0NWNlYTMtYzgzZmUudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtMjc5IiwicG9kIjp7Im5hbWUiOiJwdmMtdm9sdW1lLXRlc3Rlci14cXM2cSIsInVpZCI6IjFjNGQ1M2IzLWJhOTYtNGE1Zi05YWMxLTllNjhlNmIxYzk4YyJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6Ijg2OTYwOTllLTEwY2UtNGM5NS04NzFkLTZhMmY0NjFmZjZiMCJ9fSwibmJmIjoxNjU1ODg3NDM2LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y3NpLW1vY2stdm9sdW1lcy0yNzk6ZGVmYXVsdCJ9.JwFbs_9LBO41TEkOu8FwDxBjvS34oOxXiumQXz-n5TqA8uw7hSRmDzLjUeR6i4y2sp8ldAaU8Sq0IQhCCh1jCckJS_LL9jWeX9CqB77T8PxOF8iIAq-8mY6z-z-rJTFtihOlxCN1c2Gq9i7arMR0ej8UwY8iSkNZGTb5TxHYglIhJ9LYZ9NLjjO3XzEQhCYb5BsP1hob8gmtBIqi9OSqhO7E4dw0NOI9qBA9XeaVLloLiWB6qL7wwB_aM_UwTv_GSKl2A_27RFUUx7A9o4kb-8MZ810ifsBMOiXMV11I8KhESZHuyz847wEI5ccmlYQ8j0EV7fKqmlw0VhTCvJuurw","expirationTimestamp":"2022-06-22T08:53:56Z"}} Jun 22 08:44:13.390: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1c4d53b3-ba96-4a5f-9ac1-9e68e6b1c98c/volumes/kubernetes.io~csi/pvc-c7a5f58d-d546-4918-b786-287055a3b8ee/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-xqs6q Jun 22 08:44:13.390: INFO: Deleting pod "pvc-volume-tester-xqs6q" in namespace "csi-mock-volumes-279" [1mSTEP[0m: Deleting claim pvc-2r7zc Jun 22 08:44:13.571: INFO: Waiting up to 2m0s for PersistentVolume pvc-c7a5f58d-d546-4918-b786-287055a3b8ee to get deleted Jun 22 08:44:13.655: INFO: PersistentVolume pvc-c7a5f58d-d546-4918-b786-287055a3b8ee found and phase=Bound (84.478391ms) Jun 22 08:44:15.853: INFO: PersistentVolume pvc-c7a5f58d-d546-4918-b786-287055a3b8ee was removed ... skipping 45 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSIServiceAccountToken [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1576[0m token should be plumbed down when csiServiceAccountTokenEnabled=true [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1604[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":3,"skipped":24,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:37.512: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... 
skipping 102 lines ... &Pod{ObjectMeta:{webserver-deployment-566f96c878-grfr2 webserver-deployment-566f96c878- deployment-581 97b779ab-e73e-46cc-9fc2-8f5a96b0405f 23037 0 2022-06-22 08:44:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a8a53b45-f710-42c0-897b-3115c9b6a5ea 0xc002dd1410 0xc002dd1411}] [] [{kube-controller-manager Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8a53b45-f710-42c0-897b-3115c9b6a5ea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4pgfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pgfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-92.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],
FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 08:44:40.500: INFO: Pod "webserver-deployment-566f96c878-j7kt6" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-j7kt6 webserver-deployment-566f96c878- deployment-581 7555a4ec-2197-41b1-a68c-39654b2050de 22872 0 2022-06-22 08:44:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a8a53b45-f710-42c0-897b-3115c9b6a5ea 0xc002dd1570 0xc002dd1571}] [] [{kube-controller-manager Update v1 2022-06-22 08:44:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8a53b45-f710-42c0-897b-3115c9b6a5ea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cdlkn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cdlkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-92.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,
LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 08:44:40.500: INFO: Pod "webserver-deployment-566f96c878-lkp26" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-lkp26 webserver-deployment-566f96c878- deployment-581 3022628c-3efc-4794-b697-a923e4b4a0c0 23057 0 2022-06-22 08:44:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a8a53b45-f710-42c0-897b-3115c9b6a5ea 0xc002dd16d0 0xc002dd16d1}] [] [{kube-controller-manager Update v1 2022-06-22 08:44:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8a53b45-f710-42c0-897b-3115c9b6a5ea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dk4xd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dk4xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-138.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status
:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.0.138,PodIP:,StartTime:2022-06-22 08:44:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 08:44:40.501: INFO: Pod "webserver-deployment-566f96c878-wz8q8" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-wz8q8 webserver-deployment-566f96c878- deployment-581 82d9465f-30de-4cdd-a4cf-8366b4ab3bf6 22995 0 2022-06-22 08:44:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a8a53b45-f710-42c0-897b-3115c9b6a5ea 0xc002dd18b7 0xc002dd18b8}] [] [{kube-controller-manager Update v1 2022-06-22 08:44:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8a53b45-f710-42c0-897b-3115c9b6a5ea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.9.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-54z75,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54z75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-114.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status
:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.0.114,PodIP:100.96.9.239,StartTime:2022-06-22 08:44:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.9.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 08:44:40.501: INFO: Pod "webserver-deployment-5d9fdcc779-6jjwl" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6jjwl webserver-deployment-5d9fdcc779- deployment-581 b607b766-6554-4a57-ad5a-18449de9c6ec 23048 0 2022-06-22 08:44:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 ea054ed1-f6a4-4fa8-b734-5fb99d4237e2 0xc002dd1ac7 0xc002dd1ac8}] [] [{Go-http-client Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea054ed1-f6a4-4fa8-b734-5fb99d4237e2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hgzcx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgzcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-114.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.0.114,PodIP:,StartTime:2022-06-22 08:44:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 08:44:40.501: INFO: Pod "webserver-deployment-5d9fdcc779-6p5qs" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6p5qs webserver-deployment-5d9fdcc779- deployment-581 12149219-b96d-4bd7-96b7-f8d61b895c77 23005 0 2022-06-22 08:44:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 ea054ed1-f6a4-4fa8-b734-5fb99d4237e2 0xc002dd1c87 0xc002dd1c88}] [] [{kube-controller-manager Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea054ed1-f6a4-4fa8-b734-5fb99d4237e2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sls7z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sls7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-238.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 08:44:40.501: INFO: Pod "webserver-deployment-5d9fdcc779-7tb98" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-7tb98 webserver-deployment-5d9fdcc779- deployment-581 2118abca-342c-4ff3-8d70-68c2f42c1210 23041 0 2022-06-22 08:44:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 ea054ed1-f6a4-4fa8-b734-5fb99d4237e2 0xc002dd1e00 0xc002dd1e01}] [] [{kube-controller-manager Update v1 2022-06-22 08:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea054ed1-f6a4-4fa8-b734-5fb99d4237e2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hwr44,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hwr44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-0-92.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-22 08:44:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} ... skipping 40 lines ... [32m• [SLOW TEST:19.419 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m deployment should support proportional scaling [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":100,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 21 lines ... [32m• [SLOW TEST:9.253 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be submitted and removed [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 17 lines ... 
Jun 22 08:44:27.849: INFO: PersistentVolumeClaim pvc-dgzcp found but phase is Pending instead of Bound. Jun 22 08:44:29.928: INFO: PersistentVolumeClaim pvc-dgzcp found and phase=Bound (2.108892918s) Jun 22 08:44:29.928: INFO: Waiting up to 3m0s for PersistentVolume local-g56rl to have phase Bound Jun 22 08:44:29.974: INFO: PersistentVolume local-g56rl found and phase=Bound (45.809041ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-m82d [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:44:30.152: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m82d" in namespace "provisioning-4529" to be "Succeeded or Failed" Jun 22 08:44:30.206: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.704257ms Jun 22 08:44:32.248: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096210988s Jun 22 08:44:34.280: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128422101s Jun 22 08:44:36.309: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157775496s Jun 22 08:44:38.372: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219876333s Jun 22 08:44:40.401: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.249606566s Jun 22 08:44:42.431: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.279152051s [1mSTEP[0m: Saw pod success Jun 22 08:44:42.431: INFO: Pod "pod-subpath-test-preprovisionedpv-m82d" satisfied condition "Succeeded or Failed" Jun 22 08:44:42.464: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-subpath-test-preprovisionedpv-m82d container test-container-subpath-preprovisionedpv-m82d: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:42.580: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m82d to disappear Jun 22 08:44:42.612: INFO: Pod pod-subpath-test-preprovisionedpv-m82d no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-m82d Jun 22 08:44:42.612: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m82d" in namespace "provisioning-4529" ... skipping 21 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":14,"skipped":138,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:43.108: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 141 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should support exec through an HTTP proxy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":5,"skipped":68,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:46.727: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 14 lines ... [36mDriver hostPathSymlink doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:43:57.699: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename dns [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 21 lines ... 
[1mSTEP[0m: retrieving the pod [1mSTEP[0m: looking for the results for each expected name from probers Jun 22 08:44:12.444: INFO: File wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:12.477: INFO: File jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:12.477: INFO: Lookups using dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b failed for: [wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local] Jun 22 08:44:17.509: INFO: File wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:17.540: INFO: File jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:17.540: INFO: Lookups using dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b failed for: [wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local] Jun 22 08:44:22.546: INFO: File wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:22.647: INFO: File jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:22.647: INFO: Lookups using dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b failed for: [wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local] Jun 22 08:44:27.518: INFO: File wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:27.563: INFO: File jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local from pod dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 08:44:27.563: INFO: Lookups using dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b failed for: [wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local jessie_udp@dns-test-service-3.dns-7050.svc.cluster.local] Jun 22 08:44:32.693: INFO: DNS probes using dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: changing the service to type=ClusterIP [1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7050.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7050.svc.cluster.local; sleep 1; done ... skipping 17 lines ... 
[32m• [SLOW TEST:51.514 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should provide DNS for ExternalName services [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":8,"skipped":28,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:49.221: INFO: Only supported for providers [gce gke] (not aws) ... skipping 21 lines ... Jun 22 08:44:14.923: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 22 08:44:15.920: INFO: created pod Jun 22 08:44:15.920: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2471" to be "Succeeded or Failed" Jun 22 08:44:15.979: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 59.307096ms Jun 22 08:44:18.013: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093223877s Jun 22 08:44:20.044: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124728423s [1mSTEP[0m: Saw pod success Jun 22 08:44:20.044: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Jun 22 08:44:50.048: INFO: polling logs Jun 22 08:44:50.080: INFO: Pod logs: 2022/06/22 08:44:18 OK: Got token 2022/06/22 08:44:18 validating with in-cluster discovery 2022/06/22 08:44:18 OK: got issuer https://api.internal.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io 2022/06/22 08:44:18 Full, not-validated claims: ... skipping 14 lines ... [32m• [SLOW TEST:35.251 seconds][0m [sig-auth] ServiceAccounts [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23[0m ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":4,"skipped":59,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:50.177: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 71 lines ... 
[32m• [SLOW TEST:13.398 seconds][0m [sig-apps] ReplicationController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should test the lifecycle of a ReplicationController [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":12,"skipped":52,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 [1mSTEP[0m: Creating a pod to test hostPath mode Jun 22 08:44:37.721: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5932" to be "Succeeded or Failed" Jun 22 08:44:37.753: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 31.550147ms Jun 22 08:44:39.786: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06485589s Jun 22 08:44:41.819: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097599199s Jun 22 08:44:43.852: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130122748s Jun 22 08:44:45.887: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165476899s Jun 22 08:44:47.919: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.197925708s Jun 22 08:44:49.956: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.234806225s Jun 22 08:44:51.989: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.267464069s [1mSTEP[0m: Saw pod success Jun 22 08:44:51.989: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 22 08:44:52.021: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-host-path-test container test-container-1: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:52.096: INFO: Waiting for pod pod-host-path-test to disappear Jun 22 08:44:52.130: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:14.679 seconds][0m [sig-storage] HostPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should give a volume the correct mode [LinuxOnly] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":28,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:52.201: INFO: Only supported for providers [azure] (not aws) ... skipping 72 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":42,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 46 lines ... [36mRequires at least 2 nodes (not -1)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":11,"skipped":107,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:39.435: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 64 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":107,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:43.124: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-a352e348-b2f8-47c5-94bc-ec17cc41f06b [1mSTEP[0m: Creating a pod to test consume configMaps Jun 22 08:44:43.329: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d" in namespace "projected-9138" to be "Succeeded or Failed" Jun 22 08:44:43.358: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.077437ms Jun 22 08:44:45.386: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056990378s Jun 22 08:44:47.419: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089673903s Jun 22 08:44:49.449: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119359654s Jun 22 08:44:51.482: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152255345s Jun 22 08:44:53.511: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181495566s Jun 22 08:44:55.539: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.210006861s Jun 22 08:44:57.571: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.241973717s [1mSTEP[0m: Saw pod success Jun 22 08:44:57.571: INFO: Pod "pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d" satisfied condition "Succeeded or Failed" Jun 22 08:44:57.600: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:44:57.668: INFO: Waiting for pod pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d to disappear Jun 22 08:44:57.696: INFO: Pod pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:14.632 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":146,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 7 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:44:59.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-7944" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":9,"skipped":96,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:44:59.827: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 102 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":129,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:00.032: INFO: Only supported for providers [gce gke] (not aws) ... skipping 150 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:45:00.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "metrics-grabber-9947" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":10,"skipped":101,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:00.980: INFO: Only supported for providers [gce gke] (not aws) ... skipping 35 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:214[0m [36mDriver local doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":95,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:28.888: INFO: >>> kubeConfig: /root/.kube/config ... skipping 19 lines ... Jun 22 08:44:41.913: INFO: PersistentVolumeClaim pvc-slgc2 found but phase is Pending instead of Bound. 
Jun 22 08:44:43.945: INFO: PersistentVolumeClaim pvc-slgc2 found and phase=Bound (2.061588232s) Jun 22 08:44:43.945: INFO: Waiting up to 3m0s for PersistentVolume local-wfgsj to have phase Bound Jun 22 08:44:43.974: INFO: PersistentVolume local-wfgsj found and phase=Bound (29.575083ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-whhm [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:44:44.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-whhm" in namespace "provisioning-1447" to be "Succeeded or Failed" Jun 22 08:44:44.093: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 29.353343ms Jun 22 08:44:46.132: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068322302s Jun 22 08:44:48.163: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099088966s Jun 22 08:44:50.193: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129182028s Jun 22 08:44:52.228: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163483842s Jun 22 08:44:54.260: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.195838666s Jun 22 08:44:56.291: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.226578298s Jun 22 08:44:58.378: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.31366539s Jun 22 08:45:00.411: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.347064126s [1mSTEP[0m: Saw pod success Jun 22 08:45:00.411: INFO: Pod "pod-subpath-test-preprovisionedpv-whhm" satisfied condition "Succeeded or Failed" Jun 22 08:45:00.442: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-preprovisionedpv-whhm container test-container-subpath-preprovisionedpv-whhm: <nil> [1mSTEP[0m: delete the pod Jun 22 08:45:00.539: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-whhm to disappear Jun 22 08:45:00.571: INFO: Pod pod-subpath-test-preprovisionedpv-whhm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-whhm Jun 22 08:45:00.571: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-whhm" in namespace "provisioning-1447" ... skipping 30 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":13,"skipped":95,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:62 [1mSTEP[0m: Creating a pod to test emptydir subpath on tmpfs Jun 22 08:44:57.953: INFO: Waiting up to 5m0s for pod "pod-6a513499-324e-43d4-9e2d-b2eb85291cfb" in namespace "emptydir-9683" to be "Succeeded or Failed" Jun 22 08:44:57.993: INFO: Pod "pod-6a513499-324e-43d4-9e2d-b2eb85291cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 40.058362ms Jun 22 08:45:00.024: INFO: Pod "pod-6a513499-324e-43d4-9e2d-b2eb85291cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070873424s Jun 22 08:45:02.057: INFO: Pod "pod-6a513499-324e-43d4-9e2d-b2eb85291cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103904137s [1mSTEP[0m: Saw pod success Jun 22 08:45:02.057: INFO: Pod "pod-6a513499-324e-43d4-9e2d-b2eb85291cfb" satisfied condition "Succeeded or Failed" Jun 22 08:45:02.086: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-6a513499-324e-43d4-9e2d-b2eb85291cfb container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:45:02.172: INFO: Waiting for pod pod-6a513499-324e-43d4-9e2d-b2eb85291cfb to disappear Jun 22 08:45:02.201: INFO: Pod pod-6a513499-324e-43d4-9e2d-b2eb85291cfb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 30 lines ... Jun 22 08:44:27.145: INFO: PersistentVolumeClaim pvc-tl75f found but phase is Pending instead of Bound. 
Jun 22 08:44:29.191: INFO: PersistentVolumeClaim pvc-tl75f found and phase=Bound (12.288899949s) Jun 22 08:44:29.191: INFO: Waiting up to 3m0s for PersistentVolume local-jqshn to have phase Bound Jun 22 08:44:29.226: INFO: PersistentVolume local-jqshn found and phase=Bound (34.816943ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-q9t2 [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 22 08:44:29.339: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q9t2" in namespace "provisioning-6794" to be "Succeeded or Failed" Jun 22 08:44:29.372: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Pending", Reason="", readiness=false. Elapsed: 33.506922ms Jun 22 08:44:31.407: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068071712s Jun 22 08:44:33.439: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100407489s Jun 22 08:44:35.479: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140063679s Jun 22 08:44:37.510: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171506911s Jun 22 08:44:39.542: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203142922s ... skipping 6 lines ... Jun 22 08:44:53.772: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Running", Reason="", readiness=true. Elapsed: 24.433504728s Jun 22 08:44:55.806: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Running", Reason="", readiness=true. Elapsed: 26.467162535s Jun 22 08:44:57.838: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Running", Reason="", readiness=true. Elapsed: 28.499130841s Jun 22 08:44:59.888: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Running", Reason="", readiness=true. Elapsed: 30.549270953s Jun 22 08:45:01.920: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.581478039s [1mSTEP[0m: Saw pod success Jun 22 08:45:01.920: INFO: Pod "pod-subpath-test-preprovisionedpv-q9t2" satisfied condition "Succeeded or Failed" Jun 22 08:45:01.952: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-preprovisionedpv-q9t2 container test-container-subpath-preprovisionedpv-q9t2: <nil> [1mSTEP[0m: delete the pod Jun 22 08:45:02.024: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q9t2 to disappear Jun 22 08:45:02.057: INFO: Pod pod-subpath-test-preprovisionedpv-q9t2 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-q9t2 Jun 22 08:45:02.057: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q9t2" in namespace "provisioning-6794" ... skipping 102 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should support inline execution and attach [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:563[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":7,"skipped":98,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":43,"failed":0} [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:29.005: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename nettest [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 94 lines ... Jun 22 08:45:00.362: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0777 on node default medium Jun 22 08:45:00.575: INFO: Waiting up to 5m0s for pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96" in namespace "emptydir-3842" to be "Succeeded or Failed" Jun 22 08:45:00.641: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96": Phase="Pending", Reason="", readiness=false. Elapsed: 65.561524ms Jun 22 08:45:02.674: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099388069s Jun 22 08:45:04.718: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142849041s Jun 22 08:45:06.753: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17790811s Jun 22 08:45:08.785: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209781053s Jun 22 08:45:10.816: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.24070951s [1mSTEP[0m: Saw pod success Jun 22 08:45:10.816: INFO: Pod "pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96" satisfied condition "Succeeded or Failed" Jun 22 08:45:10.853: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:45:10.927: INFO: Waiting for pod pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96 to disappear Jun 22 08:45:10.957: INFO: Pod pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
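The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines throughout this section are the framework polling pod phase until the pod terminates. A simplified client-go version of that loop is sketched below; it is illustrative only, not the framework's implementation, and the poll interval and pod name are placeholders (the namespace is one seen in the log).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSucceededOrFailed polls a pod until it reaches a terminal phase,
// mirroring the "Succeeded or Failed" condition seen in the log above.
func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("pod %q phase=%s\n", name, phase)
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder pod name; substitute the pod under test.
	phase, err := waitForPodSucceededOrFailed(cs, "emptydir-3842", "example-pod", 5*time.Minute)
	fmt.Println(phase, err)
}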
[32m• [SLOW TEST:10.659 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":142,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:11.026: INFO: Only supported for providers [openstack] (not aws) ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: cinder] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [openstack] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092 [90m------------------------------[0m ... skipping 50 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474[0m that expects a client request [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475[0m should support a client that connects, sends DATA, and disconnects [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":9,"skipped":71,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":79,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:45:02.586: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename webhook [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 28 lines ... 
[32m• [SLOW TEST:10.249 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should deny crd creation [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":11,"skipped":79,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":16,"skipped":152,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:45:02.262: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename kubectl [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 34 lines ... [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-test-volume-ff7e1e11-b82c-4979-a2ef-5b9a89df8601 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 22 08:44:57.966: INFO: Waiting up to 5m0s for pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8" in namespace "configmap-1797" to be "Succeeded or Failed" Jun 22 08:44:57.995: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.292532ms Jun 22 08:45:00.027: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061176089s Jun 22 08:45:02.057: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091598411s Jun 22 08:45:04.088: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122585502s Jun 22 08:45:06.119: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153687049s Jun 22 08:45:08.150: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.184510622s Jun 22 08:45:10.181: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.215529352s Jun 22 08:45:12.212: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.246027787s Jun 22 08:45:14.242: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.276757661s [1mSTEP[0m: Saw pod success Jun 22 08:45:14.242: INFO: Pod "pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8" satisfied condition "Succeeded or Failed" Jun 22 08:45:14.295: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:45:14.387: INFO: Waiting for pod pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8 to disappear Jun 22 08:45:14.416: INFO: Pod pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:16.758 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":113,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:14.490: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 72 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:45:14.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "discovery-7555" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":12,"skipped":80,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:15.074: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 14 lines ... 
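The "consumable from pods in volume with defaultMode set" case above creates a ConfigMap and mounts it into a pod with an explicit file mode. A hedged sketch of the same idea with client-go types follows; the 0400 mode, key names and image are assumptions, not values recorded in this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod builds a ConfigMap and a pod that mounts it with an
// explicit defaultMode so the projected files receive those permission bits.
func configMapVolumePod() (*corev1.ConfigMap, *corev1.Pod) {
	mode := int32(0400) // assumed mode; the log only says "defaultMode set"
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.39", // assumed image
				Command: []string{"sh", "-c", "ls -l /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						DefaultMode:          &mode,
					},
				},
			}},
		},
	}
	return cm, pod
}

func main() {
	cm, pod := configMapVolumePod()
	fmt.Println(cm.Name, pod.Name)
}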
[36mDriver local doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":5,"skipped":60,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:44:50.607: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 140 lines ... [32m• [SLOW TEST:15.706 seconds][0m [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should be able to convert a non homogeneous list of CRs [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":11,"skipped":108,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:16.698: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 215 lines ... 
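The "[sig-storage] PersistentVolumes-local" block above pre-provisions local volumes on a specific node. As a rough illustration (not the test's own setup code), a local PersistentVolume needs a host path plus node affinity pinning it to the node that owns the path; the path and storage class below are assumptions, while the hostname is one of the nodes that appears in this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV sketches a pre-provisioned local PersistentVolume pinned to one node.
func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage", // assumed class name
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/vol1"}, // assumed path
			},
			// Local volumes must declare which node actually hosts the path.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"ip-172-20-0-92.ec2.internal"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(localPV().Name)
}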
[32m• [SLOW TEST:16.795 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should be able to deny pod and configmap creation [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":14,"skipped":96,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:45:11.029: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test override all Jun 22 08:45:11.217: INFO: Waiting up to 5m0s for pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197" in namespace "containers-908" to be "Succeeded or Failed" Jun 22 08:45:11.249: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Pending", Reason="", readiness=false. Elapsed: 31.217011ms Jun 22 08:45:13.300: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082532515s Jun 22 08:45:15.341: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123480307s Jun 22 08:45:17.387: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169429643s Jun 22 08:45:19.418: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200365402s Jun 22 08:45:21.449: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Pending", Reason="", readiness=false. Elapsed: 10.231637392s Jun 22 08:45:23.484: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.266508509s [1mSTEP[0m: Saw pod success Jun 22 08:45:23.484: INFO: Pod "client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197" satisfied condition "Succeeded or Failed" Jun 22 08:45:23.515: INFO: Trying to get logs from node ip-172-20-0-92.ec2.internal pod client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:45:23.589: INFO: Waiting for pod client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197 to disappear Jun 22 08:45:23.619: INFO: Pod client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
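The "[sig-node] Docker Containers ... override the image's default command and arguments" case above comes down to setting both command and args on the container, which replace the image's ENTRYPOINT and CMD. A minimal sketch with illustrative values (the image and argument strings are assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideEntrypointPod sets both Command and Args, replacing the image's
// default ENTRYPOINT and CMD ("override all" in the log above).
func overrideEntrypointPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.39",           // assumed image
				Command: []string{"/agnhost"},                                      // overrides ENTRYPOINT
				Args:    []string{"entrypoint-tester", "override", "arguments"},    // overrides CMD
			}},
		},
	}
}

func main() {
	fmt.Println(overrideEntrypointPod().Name)
}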
[32m• [SLOW TEST:12.660 seconds][0m [sig-node] Docker Containers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be able to override the image's default command and arguments [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":145,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 101 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI attach test using mock driver [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332[0m should not require VolumeAttach for drivers without attachment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:360[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":13,"skipped":53,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:23.711: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 100 lines ... Jun 22 08:45:21.164: INFO: The status of Pod pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395 is Running (Ready = true) [1mSTEP[0m: verifying the pod is in kubernetes [1mSTEP[0m: updating the pod Jun 22 08:45:21.804: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395" Jun 22 08:45:21.804: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395" in namespace "pods-6547" to be "terminated due to deadline exceeded" Jun 22 08:45:21.836: INFO: Pod "pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395": Phase="Running", Reason="", readiness=true. Elapsed: 31.830406ms Jun 22 08:45:23.869: INFO: Pod "pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 2.064792463s Jun 22 08:45:23.869: INFO: Pod "pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:45:23.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-6547" for this suite. 
[32m• [SLOW TEST:7.185 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":145,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes ... skipping 127 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (ext4)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":13,"skipped":101,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:25.637: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 83 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":35,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:26.818: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 2 lines ... 
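In the Pods activeDeadlineSeconds case above, the test updates a running pod's activeDeadlineSeconds and then observes the kubelet move it to Phase=Failed with Reason=DeadlineExceeded. A simplified sketch that sets the deadline at creation time (not the test's in-place update) is shown below; the name, image and deadline value are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deadlinePod builds a pod whose activeDeadlineSeconds causes the kubelet to
// kill it and mark it Failed/DeadlineExceeded once the deadline passes.
func deadlinePod() *corev1.Pod {
	deadline := int64(5) // seconds; assumed value for illustration
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-activedeadlineseconds-example"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			RestartPolicy:         corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				// A container that would otherwise run indefinitely; the deadline ends it.
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.7", // assumed image
			}},
		},
	}
}

func main() {
	fmt.Println(deadlinePod().Name)
}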
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: gluster] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for node OS distro [gci ubuntu custom] (not debian)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263 [90m------------------------------[0m ... skipping 175 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:45:27.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-9712" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:45:27.297: INFO: Only supported for providers [openstack] (not aws) ... skipping 40544 lines ... /dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.243277 10 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.255563 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.256974 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.257144 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.257707 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.312526 10 reflector.go:324] 
k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.313199 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.506752 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.506883 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.517521 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.517636 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.573427 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.573568 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.683858 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.684032 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.687246 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.687371 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.749319 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get 
\"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.751954 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.782969 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.786230 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.786152 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.786372 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:37.921009 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:37.921214 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:38.007974 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:38.008026 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:38.030606 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:38.030670 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nW0622 08:22:38.888161 10 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0622 08:22:38.888195 10 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nI0622 08:22:42.532972 10 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nI0622 08:22:42.533327 10 tlsconfig.go:178] \"Loaded client CA\" index=0 certName=\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\" certDetail=\"\\\"kubernetes-ca\\\" [] issuer=\\\"<self>\\\" (2022-06-20 08:09:18 +0000 UTC to 2032-06-19 08:09:18 +0000 UTC (now=2022-06-22 08:22:42.53329554 +0000 UTC))\"\nI0622 08:22:42.533551 10 tlsconfig.go:200] \"Loaded serving cert\" certName=\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\" certDetail=\"\\\"kube-scheduler\\\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\\\"kubernetes-ca\\\" (2022-06-20 08:20:54 +0000 UTC to 2023-09-21 12:20:54 +0000 UTC (now=2022-06-22 08:22:42.533524408 +0000 UTC))\"\nI0622 08:22:42.533767 10 named_certificates.go:53] \"Loaded SNI cert\" index=0 certName=\"self-signed loopback\" certDetail=\"\\\"apiserver-loopback-client@1655886156\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1655886156\\\" (2022-06-22 07:22:36 +0000 UTC to 2023-06-22 07:22:36 +0000 UTC (now=2022-06-22 08:22:42.533743337 +0000 UTC))\"\nI0622 08:22:43.409725 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-145.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:22:43.409860 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-16.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:22:43.409937 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-180.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:22:43.410053 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-206.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:22:43.410102 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-74.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:22:43.410180 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-28.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:22:43.434362 10 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...\nI0622 08:23:02.579052 10 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler\nI0622 08:23:02.579831 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cert-manager-699d66b4b-5c6s9\" err=\"0/6 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:23:02.607563 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/aws-load-balancer-controller-694f898955-j5stl\" err=\"0/6 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:23:02.621916 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cluster-autoscaler-5f8fdb7d5c-n9tzg\" err=\"0/6 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) 
were unschedulable, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:23:02.640521 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/dns-controller-5b485948f-dmd8s\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:23:02.640939 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cert-manager-cainjector-6465ccdb69-dr2fr\" err=\"0/6 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:23:02.646936 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cert-manager-webhook-6d4d986bbd-6bcfp\" err=\"0/6 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:23:04.580818 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cert-manager-cainjector-6465ccdb69-dr2fr\" err=\"0/6 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:23:13.402203 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-gsj22\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:23:13.420007 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-k98hw\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:23:13.545635 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/kops-controller-kzxmw\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:23:13.607203 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-nmw5h\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:23:27.537655 10 node_tree.go:79] \"Removed node in listed group from NodeTree\" node=\"ip-172-20-0-180.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:23:33.730040 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cluster-autoscaler-5f8fdb7d5c-n9tzg\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:23:33.730374 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cert-manager-webhook-6d4d986bbd-6bcfp\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:23:33.730434 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/aws-load-balancer-controller-694f898955-j5stl\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:23:33.730495 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cert-manager-cainjector-6465ccdb69-dr2fr\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:23:33.739119 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cert-manager-699d66b4b-5c6s9\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:24:12.863599 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-operator-7fb7bf5c7-57cq4\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:24:12.894030 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"kube-system/aws-node-termination-handler-566d67f964-nwzcp\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:24:13.036564 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-controller-774fbb7f45-5pf5j\" node=\"ip-172-20-0-28.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:32:38.218473 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-92.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:32:38.284616 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-ndkpk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:32:38.284900 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-zlwcq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:32:38.309926 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-t99f2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:36:02.454414 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-138.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:36:02.501295 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-zrm79\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=7 feasibleNodes=1\nI0622 08:36:02.501892 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-r52qv\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=7 feasibleNodes=1\nI0622 08:36:02.539356 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-t6dkg\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=7 feasibleNodes=1\nI0622 08:36:09.145772 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-238.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:36:09.294141 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-hqtmv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=8 feasibleNodes=1\nI0622 08:36:09.309010 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-zjm57\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=8 feasibleNodes=1\nI0622 08:36:09.356833 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-bs67t\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=8 feasibleNodes=1\nI0622 08:36:14.963568 10 node_tree.go:65] \"Added node in listed group to NodeTree\" node=\"ip-172-20-0-114.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:36:15.028025 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-9dkkb\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:36:15.028420 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-lhflf\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:36:15.076136 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-8sptj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:05.133835 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-2qwsn\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:05.157471 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"kube-system/coredns-autoscaler-57dd87df6c-9sskp\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=9 feasibleNodes=4\nI0622 08:37:05.226786 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/coredns-7884856795-v698g\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=9 feasibleNodes=4\nI0622 08:37:05.274476 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/metrics-server-655dc594b4-wctbl\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=9 feasibleNodes=4\nI0622 08:37:05.514052 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-tmlhh\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:10.456704 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/coredns-7884856795-zsblk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=9 feasibleNodes=4\nI0622 08:37:10.940747 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-v2l66\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:11.013558 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-h6p8c\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:11.306949 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-7g47c\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:12.147671 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-84jnb\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:13.542622 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-lbbk5\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:14.332366 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-rbz87\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:15.115804 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-vsb7b\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:16.831126 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-4gq7x\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:16.917414 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-5smg2\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:16.976689 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-vtlxv\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:17.028799 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-pdw5h\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:18.113395 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-vkvvz\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:18.525831 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-8ghss\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:18.610345 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-b4f57\" node=\"ip-172-20-0-16.ec2.internal\" 
evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:18.799778 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-kw6vl\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:19.306183 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-pttwj\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:19.948018 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-krnmk\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:20.601462 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-szvzk\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:20.925226 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-shxr7\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:21.314342 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-j5j69\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:21.405869 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-lrvwt\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:22.790748 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-hqnj8\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:23.131698 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-tl5rz\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:23.205218 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-pcvgj\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:23.347999 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-p5989\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:23.393546 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-g86rs\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=8\nI0622 08:37:23.593471 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-749fs\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:24.316615 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-xq8cp\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:25.053151 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-dtlc2\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:26.283040 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-rmbq9\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:26.333179 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-84745\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:27.644311 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-w6v58\" node=\"ip-172-20-0-145.ec2.internal\" evaluatedNodes=9 feasibleNodes=7\nI0622 08:37:28.397656 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-n9p99\" 
node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:28.795827 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-xjp6t\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:29.186852 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-k7cwt\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:29.459605 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-zbc92\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:30.461409 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-n7tzz\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:34.888773 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-2z6lf\" node=\"ip-172-20-0-16.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:34.901228 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-f59n7\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:35.267038 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-x69sr\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:35.656639 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-75gkr\" node=\"ip-172-20-0-206.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:45.804130 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/metrics-server-655dc594b4-h7bxn\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=9 feasibleNodes=3\nI0622 08:37:55.447433 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-zf9lw\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:55.477379 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-lc782\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:55.516136 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-s42dg\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:56.774723 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-cnplk\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:57.005736 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-bsx92\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=9 feasibleNodes=1\nI0622 08:37:58.546337 10 node_tree.go:79] \"Removed node in listed group from NodeTree\" node=\"ip-172-20-0-145.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:37:58.621714 10 node_tree.go:79] \"Removed node in listed group from NodeTree\" node=\"ip-172-20-0-206.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:37:58.678746 10 node_tree.go:79] \"Removed node in listed group from NodeTree\" node=\"ip-172-20-0-16.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:37:58.830171 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-kz6p6\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:00.190079 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-wx9c9\" node=\"ip-172-20-0-74.ec2.internal\" 
evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:00.995126 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-kvh4f\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:01.993132 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-fr97h\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:06.380981 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/node-local-dns-v6tb6\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:06.780524 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/cilium-mnr7d\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:07.180029 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/ebs-csi-node-2mbxd\" node=\"ip-172-20-0-74.ec2.internal\" evaluatedNodes=6 feasibleNodes=1\nI0622 08:38:39.747772 10 node_tree.go:79] \"Removed node in listed group from NodeTree\" node=\"ip-172-20-0-74.ec2.internal\" zone=\"us-east-1:\\x00:us-east-1a\"\nI0622 08:38:53.437204 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kube-system/hubble-relay-55846f56fb-bhhxv\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.100599 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2938/inline-volume-47fx8\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-47fx8-my-volume\\\".\"\nI0622 08:40:54.164690 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8663/inline-volume-8gzcz\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-8gzcz-my-volume\\\".\"\nI0622 08:40:54.208896 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-1855/downwardapi-volume-466bce02-f25e-4d17-a24c-c372b01f070f\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.253939 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-1373/test-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.265202 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-2979/test-rs-gxkmm\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.279627 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1531/externalsvc-zp565\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.297616 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1531/externalsvc-7jwmq\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.335323 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-2576/hostexec-ip-172-20-0-238.ec2.internal-4kr66\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:54.335841 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-8986/downwardapi-volume-659b0612-5eb1-4d85-9981-babd61f6c4f1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.335897 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-220/pod-projected-configmaps-313fe96a-bf20-4708-90e3-97dc983f2f6f\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0622 08:40:54.416542 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7761/hostexec-ip-172-20-0-92.ec2.internal-7rchw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:54.425802 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-7495/e2e-configmap-dns-server-5ac2d2db-51ef-4659-84b4-31ea61c244f0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.565311 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-8159/pod-projected-configmaps-b9e9f982-e924-4f32-8ce9-a11b836a0e65\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.599028 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-3311/affinity-clusterip-transition-btmk5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.649263 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-3311/affinity-clusterip-transition-nz72v\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.654656 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-3311/affinity-clusterip-transition-2b5ln\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:54.930349 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-6598/test-recreate-deployment-7d659f7dc9-fhlq6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:55.248757 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9649/inline-volume-jrp6f\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-jrp6f-my-volume\\\".\"\nI0622 08:40:55.600394 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6464/hostexec-ip-172-20-0-238.ec2.internal-zr28c\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:55.793683 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2438/hostexec-ip-172-20-0-138.ec2.internal-7wvmj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:55.898060 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-6145/test-rs-b4tbq\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:56.391292 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8119/hostexec-ip-172-20-0-114.ec2.internal-ns49w\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:56.454453 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-9966/hostexec-ip-172-20-0-138.ec2.internal-fl2mf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:57.032855 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-7193/test-orphan-deployment-5d9fdcc779-w9mj7\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:58.021622 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-9649-4756/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:58.106590 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2909-3546/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:58.142220 10 
factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9649/inline-volume-tester-8w6kf\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-8w6kf-my-volume-1\\\".\"\nI0622 08:40:58.172691 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2909-3546/csi-mockplugin-resizer-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:58.466618 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-8317/test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:40:58.662322 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-2938-5374/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:58.727961 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2938/inline-volume-tester-lxjqc\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-lxjqc-my-volume-0\\\".\"\nI0622 08:40:58.793682 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8663-5410/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:40:58.873591 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8663/inline-volume-tester-mr6km\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-mr6km-my-volume-0\\\".\"\nI0622 08:41:00.383237 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-1373/test-host-network-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:01.189449 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-6598/test-recreate-deployment-5b99bd5487-wshbc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:02.387473 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-2979/test-rs-4phhs\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:02.448637 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-2979/test-rs-cw2cv\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:03.514956 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-6106/pod-9f8b59b1-ef29-45e6-a554-7e53d82fb80f\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:04.534331 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-7495/e2e-dns-utils\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:04.854755 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-6827/metadata-volume-f44e65df-a9f5-44ae-897c-0d45620d5819\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:05.006580 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-3890/pod-68b969b1-0954-4049-b5dd-42b648b0ed6c\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:05.511664 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-4484/pfpod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:05.713937 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"volume-expand-2726/pod-1364c347-f2a8-4bc6-ac6b-a9f7fc500339\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:06.413609 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1531/execpod459mq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:06.932191 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4050-4171/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:06.985149 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4050-4171/csi-mockplugin-resizer-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:07.761533 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2438/pod-e4575383-8935-4fd8-b87c-33775fd88c57\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:07.803667 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8119/pod-0f23cf32-78a2-42c7-8b1e-6818be1286d3\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:08.022777 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-4664/aws-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:08.866569 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-8317/test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:09.777539 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-3311/execpod-affinityk7h6r\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:12.332617 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8663/inline-volume-tester-mr6km\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:12.348247 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-2938/inline-volume-tester-lxjqc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:12.356017 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-9649/inline-volume-tester-8w6kf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:12.390936 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-2416/test-deployment-z6twk-764bc7c4b7-f54gr\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:12.527646 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8447/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:12.568777 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8447/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:12.601225 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5578/nodeport-test-t6rmr\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:12.631433 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5578/nodeport-test-26x77\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:12.641401 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8447/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:12.652425 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8447/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:13.203197 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6464/pod-subpath-test-preprovisionedpv-j64s\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:13.572304 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7761/exec-volume-test-preprovisionedpv-2ppn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:13.668128 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-9719/security-context-359a1dda-a7cb-4873-a598-39266d718756\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:13.888391 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-9966/pod-f5a1e933-0d11-4985-b51e-98f2e2232fe5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:14.745828 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-3358/pause\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:15.048966 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-2576/exec-volume-test-preprovisionedpv-8w74\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:15.484211 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-9137/pod-configmaps-5a9bd23f-1b36-4c5b-a36e-2160ba4294f4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:18.859599 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4050/pvc-volume-tester-v9jlp\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:19.218233 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-8317/test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:20.674894 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2438/pod-5afe54d3-db3a-426f-bdf3-6fae55aefb91\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:20.928295 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-6694/sample-webhook-deployment-78948c58f6-29b78\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:21.252555 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-1520/downward-api-086aa857-7484-4c67-9b93-e8d7c6431478\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:24.028210 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-7974/security-context-c4dbf756-1d6c-4c06-a338-57f01eff7d32\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:24.050314 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-9966/hostexec-ip-172-20-0-138.ec2.internal-q6kfx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:26.375753 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1197/externalname-service-9cnt4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:26.394519 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"services-1197/externalname-service-j6r8d\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:26.783906 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-3619/pod-cefdf01c-c8ed-413c-b313-72f455e6079e\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:26.967295 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2909/pvc-volume-tester-fl4vs\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:27.233113 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4050/pvc-volume-tester-6f78m\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:27.670770 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5578/execpodmzgzm\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:29.191506 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9441/hostexec-ip-172-20-0-138.ec2.internal-w6jcg\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:31.623551 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-2967/pod-b16ee8d9-6f49-4d82-8be3-2fa9f1d3ad02\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:31.661197 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-8317/test-pod-7e15d689-6623-435d-82cc-d3f2ed520e18\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:32.514681 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-414/httpd\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:32.959646 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-2919/e2e-test-httpd-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:33.010204 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4116/pod-subpath-test-inlinevolume-l28m\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:33.048061 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-6106/pod-37a5418e-ce40-41b2-b3a8-10a0d631f042\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:33.237820 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-9262/ss-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:34.856939 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2938/inline-volume-tester2-mcj5k\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester2-mcj5k-my-volume-0\\\".\"\nI0622 08:41:35.312949 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-6335/termination-message-containera100bc2d-1acd-45c0-be6f-176cac7d1185\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:35.446239 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1197/execpodt7tsg\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:35.776844 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9441/pod-7a546702-0b69-4278-865e-60fff0a22126\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:37.067304 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-7974/pod-6ec0ab50-240d-45cf-90aa-3b72c795e6ff\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:37.200751 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"proxy-5255/proxy-service-x9rx4-rvgdn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:37.358464 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-2938/inline-volume-tester2-mcj5k\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:38.071800 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5175/pod-subpath-test-inlinevolume-fkrw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:38.510376 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"clientset-2670/pod005d81d8-25c5-4c17-a5e2-0651d158b16e\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:40.242456 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-9343-6478/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:41.058534 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8447/test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:42.081247 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-4664/aws-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:43.176848 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-7039/test-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:43.731237 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2643/hostexec-ip-172-20-0-92.ec2.internal-cpsjh\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:44.274879 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6573/hostexec-ip-172-20-0-114.ec2.internal-pjdhk\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:44.304728 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-9558/my-hostname-basic-e08c375a-2336-4ad2-9dc3-6cea09f9397a-rlsth\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:46.354004 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7387/hostexec-ip-172-20-0-92.ec2.internal-zn874\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:47.037148 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-4909/pfpod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:48.366744 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2677/hostexec-ip-172-20-0-138.ec2.internal-7w4j8\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:48.506238 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9441/pod-584cd7f1-aca0-46cd-9e33-95be4a81af83\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:48.573220 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2598/inline-volume-92n22\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim 
\\\"inline-volume-92n22-my-volume\\\".\"\nI0622 08:41:49.026019 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7858/inline-volume-mpswf\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-mpswf-my-volume\\\".\"\nI0622 08:41:50.805495 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2598/inline-volume-tester-2k5qh\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-2k5qh-my-volume-0\\\".\"\nI0622 08:41:50.911972 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-6906/httpd\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:51.039378 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-414/run-log-test\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:51.272985 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7858/inline-volume-tester-9l6q4\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-9l6q4-my-volume-0\\\".\"\nI0622 08:41:52.357489 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7858/inline-volume-tester-9l6q4\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:41:53.849574 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4962/hostexec-ip-172-20-0-114.ec2.internal-7td9v\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:54.358397 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7858/inline-volume-tester-9l6q4\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:41:54.672715 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-9343/hostpath-injector\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:56.281203 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2643/pod-1685a042-f656-41dd-ba9f-759c895cc3f8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:56.382513 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-2598/inline-volume-tester-2k5qh\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:56.393177 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4962/pod-7e2061aa-27f0-4cac-9c89-9e3cba32df16\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:57.754686 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2937/hostexec-ip-172-20-0-238.ec2.internal-dlc5l\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:58.367889 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-7858/inline-volume-tester-9l6q4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:41:59.017367 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2677/pod-subpath-test-preprovisionedpv-cxqw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:59.086314 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"persistent-local-volumes-test-4962/pod-537189dc-2a96-4f3b-b0b3-eadc98a1a5d7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:59.351375 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6573/pod-subpath-test-preprovisionedpv-qh6b\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:59.537109 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7387/pod-subpath-test-preprovisionedpv-c52s\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:41:59.721048 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-9376/pod-exec-websocket-2fed1431-41b6-4657-9eb9-b31a9a361f0c\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:00.177514 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-250/hostexec-ip-172-20-0-92.ec2.internal-pdvnb\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:05.373410 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-d6997\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:05.400466 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-cmqgl\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:05.411256 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-vlrdw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:05.411991 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-vzcmb\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:05.412263 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-pcvmt\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:05.412564 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-cx89b\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:05.709326 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-5069-9078/csi-hostpathplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:06.912858 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-7976/busybox-8c1e26eb-d30d-409f-a78d-b3a703440700\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:07.554111 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-wnxvc\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:07.562393 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-svwjb\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:07.635970 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-wsgsb\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:07.668508 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-6221/downward-api-eba5c416-3a06-4641-9b2b-9ce0a657902b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:08.633482 10 scheduler.go:615] \"Successfully bound pod 
to node\" pod=\"deployment-8549/webserver-5d9fdcc779-tdblc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:08.683683 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-g55cx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:08.768413 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-255df\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:09.001155 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-2861/dns-test-1f31f2da-b5a3-4d55-8abd-284bda704a59\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:09.812822 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-2ghll\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:09.868133 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-rlqvg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:10.322890 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-wm289\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:10.411274 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5cf7b9bdb6-4gtc7\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:10.440149 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5cf7b9bdb6-777sv\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:10.440459 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5cf7b9bdb6-zfsjt\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:11.465222 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-7929/pod-secrets-7f618c87-7cb1-499c-89ed-7b4840f25086\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:11.676239 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"prestop-6209/server\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:12.083840 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-9262/ss-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:12.782901 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-6nhvs\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:12.793783 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-spwpz\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:12.816463 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-mtsm9\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:13.300634 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2937/pod-subpath-test-preprovisionedpv-hzjp\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:14.641992 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-1359/pod-f4f67707-a58b-4a31-893e-36536b687d81\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0622 08:42:15.541168 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-xrfqs\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:15.546944 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2677/pod-subpath-test-preprovisionedpv-cxqw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:15.770042 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-xrrt6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:15.842938 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-ddcwp\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:15.889159 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-lfbps\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:15.931780 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-bbtnd\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:18.601191 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"topology-6482/pod-6e364a47-0d5a-4729-a21e-220ed3a0c96c\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:19.477398 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-7vjqg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:20.347709 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-9343/hostpath-client\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:20.397928 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-9869/pod-adoption-release\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:20.517299 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"topology-6550/pod-fdaf9caa-091a-4ec2-ab41-63efc8e576ec\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:42:20.864258 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-689bb66d9d-dkklw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:22.376617 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"topology-6550/pod-fdaf9caa-091a-4ec2-ab41-63efc8e576ec\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:42:24.113663 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-5730/busybox-2b3226a0-635a-4a7b-baae-5a7314af8483\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.224213 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-sxltc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.383026 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"topology-6550/pod-fdaf9caa-091a-4ec2-ab41-63efc8e576ec\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.733059 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-xk667\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.767394 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-q5d6m\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.768200 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-4bts2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.768580 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-977gn\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.768897 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-skbvj\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.773692 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-5g88b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.798516 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-c9vnr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.802914 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-kkpps\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.823105 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-5mbvx\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:24.823806 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-rf8zr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:25.797576 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"prestop-6209/tester\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.730731 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-9869/pod-adoption-release-k4gb6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.740373 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5cf7b9bdb6-kkjbh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.775726 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5cf7b9bdb6-x4j5b\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.854144 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5cf7b9bdb6-rgkdf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.928006 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-5d9fdcc779-8hrt7\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.965347 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-5xcfz\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:26.989457 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-qc2s8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:27.082143 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-8rhck\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:27.345679 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-dqkkq\" 
node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:27.437697 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-bk9ch\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:27.957080 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8461/inline-volume-frkzb\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-frkzb-my-volume\\\".\"\nI0622 08:42:29.264622 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-s7nwg\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:29.312118 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-250/pod-subpath-test-preprovisionedpv-vfzt\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:29.536681 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-7rw5d\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:29.590525 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-gkqlz\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:29.640947 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-zszz5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:29.701249 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-cf4wq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:29.944995 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8461-53/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:30.019636 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8461/inline-volume-tester-xgrk9\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-xgrk9-my-volume-0\\\".\"\nI0622 08:42:30.333113 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"init-container-1621/pod-init-cdb3bf13-8e7e-4bc8-bbbf-70a354bfab41\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:31.382622 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8461/inline-volume-tester-xgrk9\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:42:31.614084 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-bplzt\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.320099 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-9zk4g\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.336324 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-8686c6ff66-rk7h2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.478746 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-ssz98\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.516925 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-k6nks\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.538447 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-v9mpk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.541542 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-ss6td\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.558387 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-8xhm2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:32.809471 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-1359/hostexec-ip-172-20-0-238.ec2.internal-wmmtf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:33.913798 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8461/inline-volume-tester-xgrk9\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:42:34.632940 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-p6nbn\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:37.273417 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-dhqrv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:37.290523 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-7r72l\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:37.300534 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-659bfd846d-f2k57\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:37.326884 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-5422/hostpath-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:38.396466 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8461/inline-volume-tester-xgrk9\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:39.377301 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-6gr7t\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:39.404297 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-4l4nk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:39.404625 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-5nxhj\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:41.400723 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-6s7hp\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.388373 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-5214/security-context-55b49cef-e81f-4815-90d9-4e2c8cd09cd2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.485120 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-ldx5r\" 
node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.512080 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-57f55c85b8-n4kmv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.726334 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-lqqfs\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.757230 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-nwtx8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.763867 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-qjgkx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.769223 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-8xrtr\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:43.788402 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-xgmwn\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:45.686565 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-qkntx\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:45.836280 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-250/pod-subpath-test-preprovisionedpv-vfzt\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:46.217703 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3873/pod-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:46.249523 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3873/pod-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:46.999253 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3516/rs-n7qp7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:47.245633 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5045/kube-proxy-mode-detector\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:47.742228 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-tvlbb\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:48.837761 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-9262/ss-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:49.189129 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-5sd6t\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:49.939390 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8549/webserver-68df9976f4-kwfkg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:51.271638 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-733/pod-subpath-test-inlinevolume-72b2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:51.897651 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"projected-5900/pod-projected-configmaps-b9da4e02-47b5-43d8-bc39-3713a473200a\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:56.147186 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5045/echo-sourceip\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:42:57.876679 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7541/hostexec-ip-172-20-0-114.ec2.internal-7ll8s\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:42:58.539589 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-3970/pod-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:00.004181 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-763/hostexec-ip-172-20-0-114.ec2.internal-2xhwt\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:01.280031 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-5422/hostpath-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:02.180047 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-6697/pod-hostip-3f830fd5-9f2c-4ba2-b2d1-07530b9ec824\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:02.378071 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:02.439430 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:03.066707 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-9965/exec-volume-test-dynamicpv-zsvd\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:03.408295 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:03.815674 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-902/busybox-user-0-bc6b84f1-01a1-42d6-be36-f1d5b9d369f1\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:04.199705 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7080/hostexec-ip-172-20-0-92.ec2.internal-h4tc6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:04.408755 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:04.626919 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"limitrange-3468/pfpod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:04.855115 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"prestop-6142/pod-prestop-hook-84022c72-efc4-49ff-b220-3501e1450b14\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 
08:43:05.409468 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:05.876721 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6857-4630/csi-mockplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:05.909781 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6857-4630/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:05.939089 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6857-4630/csi-mockplugin-resizer-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:06.381740 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5045/pause-pod-6d899cd6b-58w77\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:06.395594 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5045/pause-pod-6d899cd6b-ws57x\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=3\nI0622 08:43:06.409988 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:06.886371 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-2822/liveness-48847cdb-8510-4e75-9b4e-4ad28fd9522b\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:07.154327 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-5587/all-pods-removed-xt4hv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:07.170242 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-5587/all-pods-removed-v65pz\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:07.796026 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8761/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:07.841120 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8761/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:07.877012 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8761/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:07.906249 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8761/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:09.412488 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-no-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:09.751358 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:09.878093 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-6029-206/csi-mockplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:09.967884 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6029-206/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:10.105113 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-3941/test-pod-1\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:10.264293 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6356/hostexec-ip-172-20-0-92.ec2.internal-v4bc6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:10.413464 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pod-partial-resources\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:11.413819 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:13.415107 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"limitrange-3468/pfpod2\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient ephemeral-storage.\"\nI0622 08:43:14.665258 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7541/pod-subpath-test-preprovisionedpv-26hd\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:14.769667 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-763/pod-subpath-test-preprovisionedpv-8dpc\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:15.877455 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-8379/pod-projected-configmaps-a04b9c80-6ac7-462c-878b-42ddc9a41559\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:17.814699 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7608/hairpin\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:18.000635 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-351/pod-subpath-test-inlinevolume-b428\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:22.209337 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-3941/test-pod-2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:22.639004 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4326/hostexec-ip-172-20-0-138.ec2.internal-jb7n9\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:24.513130 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1581/hostexec-ip-172-20-0-138.ec2.internal-8tt4n\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:24.588992 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6857/pvc-volume-tester-828h5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:25.769176 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"container-runtime-8024/image-pull-test7a5e0f05-2525-41cf-ad4f-27fca43f9ee4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:26.796782 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-5475/sample-webhook-deployment-78948c58f6-h6rmc\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:27.094894 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1581/pod-f8d1b50a-1093-4b95-b0f5-5d4d2aeb1073\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:28.294164 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-3941/test-pod-3\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:28.817234 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3756/hostexec-ip-172-20-0-92.ec2.internal-g48cj\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:28.881418 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6356/pod-subpath-test-preprovisionedpv-2pkr\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:29.136315 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-5772/pod-qos-class-f200bf60-8ed0-4ffd-9834-010a95981faf\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:29.271578 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4326/pod-subpath-test-preprovisionedpv-fwg6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:29.323735 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-279-5337/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:29.363720 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7080/local-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:29.399310 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-279-5337/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:29.504264 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-466/hostexec-ip-172-20-0-114.ec2.internal-vbnqx\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:29.541061 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1554/pod-subpath-test-inlinevolume-kn5d\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:29.748736 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9051/hostexec-ip-172-20-0-92.ec2.internal-nz2d2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:32.272911 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8761/test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:32.305246 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8761/host-test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:32.527103 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6029/pvc-volume-tester-fq6cf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:33.545123 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"projected-8155/downwardapi-volume-bdfef638-203a-4717-b7ae-68f3fb965d0b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:36.342909 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"sctp-4915/hostexec-ip-172-20-0-114.ec2.internal-gf4m8\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:36.395133 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-774/liveness-53cee48e-208b-4e81-9981-bc80d079eab0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:36.918983 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-1531/ss-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:37.250410 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"init-container-7852/pod-init-6663a1a6-0031-44c2-8b74-299a8d8094b6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:38.706531 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"sctp-4915/hostexec-ip-172-20-0-238.ec2.internal-jjskv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:38.921830 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6857/pvc-volume-tester-7l5xp\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:39.943975 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1785/externalname-service-6n6l9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:39.957834 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1785/externalname-service-4lj6w\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:41.312903 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"sctp-4915/pod1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:41.462278 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-7700/condition-test-8xb86\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:41.490375 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-7700/condition-test-s76g4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:42.083090 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1594/pod-subpath-test-inlinevolume-nptq\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:42.382762 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3164/hostexec-ip-172-20-0-138.ec2.internal-rntnt\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:42.420395 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:42.582706 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-4731/projected-volume-2b9c5075-cb34-4f26-95be-14199f8b3610\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:42.932274 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-23/pod-handle-http-request\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:43.552232 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3756/exec-volume-test-preprovisionedpv-xlgd\" 
node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:44.179546 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4167/httpd\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:44.274874 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-466/pod-subpath-test-preprovisionedpv-vscj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:44.696804 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9051/pod-subpath-test-preprovisionedpv-w4tn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:45.396698 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3222-8883/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:45.424495 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3222-8883/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:45.459298 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3222-8883/csi-mockplugin-resizer-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:46.003692 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1785/execpodmhrmr\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:48.139609 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-279/pvc-volume-tester-xqs6q\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:48.543629 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7901/hostexec-ip-172-20-0-138.ec2.internal-kspj9\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:49.060488 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-23/pod-with-poststart-exec-hook\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:49.245726 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7080/local-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:50.736407 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-9961/test-rs-x82gh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:50.758202 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-9961/test-rs-xt5z2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:50.766280 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-9961/test-rs-ltjhn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:51.701007 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"sctp-4915/hostexec-ip-172-20-0-114.ec2.internal-xx98d\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:52.348366 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:54.552584 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:55.320082 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"deployment-8013/test-deployment-6cdc5bc678-bzjps\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:55.339605 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8013/test-deployment-6cdc5bc678-b2vtt\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:55.628223 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-5669/pod-bdc2f78d-bb85-42e2-81fc-980be7adbe32\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:56.086487 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-5585/sample-webhook-deployment-78948c58f6-44ht5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:57.303169 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3222/pvc-volume-tester-jvkdv\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:58.006358 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-7050/dns-test-08c7abc3-ef46-4c83-9263-794c4e6af7dc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:58.196620 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"sctp-4915/hostexec-ip-172-20-0-238.ec2.internal-fffxx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:58.948001 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svc-latency-3338/svc-latency-rc-qs24f\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:43:59.077262 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3164/pod-subpath-test-preprovisionedpv-lvv4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:43:59.358849 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7901/pod-subpath-test-preprovisionedpv-lnl8\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:00.176544 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5060/hostexec-ip-172-20-0-138.ec2.internal-lkvhf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:00.664535 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4167/run-test\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:03.248489 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3211/hostexec-ip-172-20-0-114.ec2.internal-bj2kh\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:04.431670 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8013/test-deployment-5ddd8b47d8-4q4c6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:04.623124 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"proxy-1150/agnhost\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:05.642046 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"subpath-9680/pod-subpath-test-secret-v5zv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:06.059411 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3211/pod-c9bec92e-b868-499e-b4a2-4fc0ccac84cd\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:06.291101 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"dns-7050/dns-test-7ea920f7-32b5-403c-a91c-13aa6d53678b\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:07.707474 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:10.834417 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-5956/pod-logs-websocket-d6c7db64-19a5-4742-a03e-bfec2f0e688f\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:10.905673 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8013/test-deployment-854fdc678-r5x6x\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:10.906264 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8013/test-deployment-5ddd8b47d8-mvzq4\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:12.054364 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4364/agnhost-primary-2n45n\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:12.759619 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-6951/busybox-privileged-true-f2d5996c-ef90-4292-b84c-bcc774fc6d31\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:13.097734 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:13.397179 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-7534/deployment-8d545c96d-776vx\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:13.433374 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-7534/deployment-8d545c96d-96svv\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:13.434045 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-7534/deployment-8d545c96d-pj7j5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:13.467260 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-7534/deployment-7c658794b9-vbx8g\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:14.180283 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5060/pod-subpath-test-preprovisionedpv-hrtx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:14.480361 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6794/hostexec-ip-172-20-0-238.ec2.internal-869qx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:14.610337 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-2636/my-hostname-basic-0bfba110-8a85-4d50-b59b-a173d9a5e633-7s94g\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:14.710503 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"containers-4298/client-containers-7beeeda7-238f-48df-b5db-fe75434dcb25\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:15.992528 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-2471/oidc-discovery-validator\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:15.993430 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"kubectl-2636/httpd\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:16.525711 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-5550/termination-message-container5649bd7f-1877-462c-9cbe-7c903a39faf2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:16.953163 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-3803/explicit-nonroot-uid\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:17.063505 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-6779/sample-webhook-deployment-78948c58f6-b7wcr\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:17.809614 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-2496/busybox-user-65534-029f05e7-3e8a-4245-a65f-f3bc6314cf10\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:19.065289 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:19.592922 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-8013/test-deployment-854fdc678-8nnmw\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.356638 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4529/hostexec-ip-172-20-0-92.ec2.internal-l772q\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:21.500412 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-lntgs\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.500473 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"init-container-857/pod-init-ef18d953-3819-424b-a3e3-677f8c545f46\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.517152 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-tsz6r\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.549198 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-rzpl8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.549290 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-fhtr6\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.587356 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-k8f4g\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.598767 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-vfpfc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.598828 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-tmm6f\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.628118 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-ggk7j\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.641180 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-bvhnz\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:21.649734 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-fgcts\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:22.701049 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5558/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:22.785306 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5558/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:22.811281 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5558/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:22.886305 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5558/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:23.592151 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4167/run-test-2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:24.260677 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-1970/pod-size-memory-volume-a3f93b79-3539-487c-a9d2-f64e2e5c693b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:26.766956 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6419/pod-f04a0e65-ac04-4c29-857f-a2cd53dc34fe\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:27.098566 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-736/busybox-9f073274-3e10-419b-9b9e-61946e2b226b\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:29.155663 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1447/hostexec-ip-172-20-0-238.ec2.internal-74mjs\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:29.294850 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-934/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:29.345297 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6794/pod-subpath-test-preprovisionedpv-q9t2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:29.355452 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-934/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:29.389185 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-934/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:29.419971 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-934/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:30.174995 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4529/pod-subpath-test-preprovisionedpv-m82d\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:30.525134 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-2636/failure-1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 
08:44:30.907254 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-458/httpd\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:32.329987 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2605/hostexec-ip-172-20-0-138.ec2.internal-4wz4c\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:32.885295 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-7050/dns-test-6d537c48-d2aa-4c03-a755-1f26bce5e353\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:33.161800 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-8831/pod-submit-remove-95dcd27b-fecc-4181-8b94-be0cd582d622\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:35.632959 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4663/pod-subpath-test-inlinevolume-q5mv\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:35.762571 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-j7kt6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:35.790417 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-wz8q8\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:35.796330 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-8qs5h\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:35.881861 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-lkp26\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:35.896257 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-7rjkk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:37.021054 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-8375/pfpod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:37.546250 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-9564/rc-test-r28bw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:37.712796 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"hostpath-5932/pod-host-path-test\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.225618 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-nk5c5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.259064 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-6jjwl\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.261381 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-6p5qs\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.265475 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-ffds9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.288728 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-ckcf6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.310392 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-9fwwn\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.321185 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-qxgmg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.321552 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-fwlbm\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.321560 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-2l22h\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.364766 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-5kp2h\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.365383 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-c6t8q\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.370443 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-9nqd2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.370629 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-tlf4m\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.371380 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-wk57s\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.371622 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-tl4b5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.371871 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-grfr2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.376019 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-rdd88\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.379133 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-wwgnc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.379331 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-5d9fdcc779-7tb98\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:38.379598 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-581/webserver-deployment-566f96c878-f2dpf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:39.122361 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-7773/pod-2b90d25a-65ee-4966-997e-61d21a909d63\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0622 08:44:39.260352 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2605/pod-b19fa231-99ed-4cde-98cf-45f210667694\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:39.678997 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8152/hostexec-ip-172-20-0-92.ec2.internal-f8vj6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:40.803047 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8043/hostexec-ip-172-20-0-114.ec2.internal-gg5v4\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:43.323907 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-9138/pod-projected-configmaps-aafa697f-0f03-4cce-8fa3-80c92aa7c21d\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:43.981399 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4167/run-test-3\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:44.067597 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1447/pod-subpath-test-preprovisionedpv-whhm\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:44.507160 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:47.394587 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-9564/rc-test-kwvlw\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:47.934475 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6419/pvc-volume-tester-writer-fpbpf\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:48.085968 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8423-2203/csi-mockplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:50.248487 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2605/pod-b3714414-1136-48c4-8aa5-622b7c210007\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:50.851156 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-541/hostexec-ip-172-20-0-138.ec2.internal-jnszf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:51.241584 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5558/test-container-pod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:52.097904 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5870-4442/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:52.214282 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8152/pod-1753e1c9-aa14-40ec-b6ad-da13ade6b269\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:52.431578 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-840/hostexec-ip-172-20-0-238.ec2.internal-jb6nr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:53.501581 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"provisioning-8039/pod-subpath-test-dynamicpv-zznx\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:56.041209 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-8309-1286/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:44:57.298547 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5207/ss2-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:57.756725 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-934/test-container-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:57.953363 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-9683/pod-6a513499-324e-43d4-9e2d-b2eb85291cfb\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:57.971701 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-1797/pod-configmaps-0894e577-31d6-4067-9700-4ca33df959e8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:59.277806 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-1900/aws-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:44:59.505177 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5870/pvc-volume-tester-776xn\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:00.044160 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8043/local-injector\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:00.215432 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-7997/failed-jobs-history-limit-27598125-jv4mk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:00.572848 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-3842/pod-25e685de-f0a0-45c3-8a99-a6e70e51fc96\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:01.643700 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"crd-webhook-2170/sample-crd-conversion-webhook-deployment-bb9577b7b-sp5gk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:01.875018 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-7773/pod-2dd5ece0-1496-4c2f-ade0-2c4d87288c9e\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:02.378521 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-4733/sample-webhook-deployment-78948c58f6-xgdpc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:02.413059 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-541/pod-9b7e1cb1-b1a0-4c01-b91f-e2d3a85092ce\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:02.563842 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4777/e2e-test-httpd-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:03.279081 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-36/sample-webhook-deployment-78948c58f6-xtphf\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:04.530977 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-6286/simpletest.rc-sjkmr\" 
node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:04.544046 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-6286/simpletest.rc-9bc9g\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:06.742804 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8423/pvc-volume-tester-r7m79\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:08.221736 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9552/hostexec-ip-172-20-0-238.ec2.internal-vnpzb\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:09.314855 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-541/pod-07caf12e-f31d-4c47-9225-77b6bdc0fca6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:10.093683 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8043/local-client\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:11.209977 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"containers-908/client-containers-d052365a-5373-4e0b-bd8d-8d9189e01197\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:11.970384 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4048/hostexec-ip-172-20-0-114.ec2.internal-gqc55\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:13.453742 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-840/pod-subpath-test-preprovisionedpv-fztg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:15.170059 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-271/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:15.234998 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-271/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:15.270584 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-271/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:15.307878 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-271/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:15.502396 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-6224/pod-projected-secrets-ae46df91-b85d-429d-873a-81ce827927a3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:16.415300 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4147/hostexec-ip-172-20-0-92.ec2.internal-6rh6w\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:16.942090 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6419/pvc-volume-tester-reader-fnb7j\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:17.095104 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-6547/pod-update-activedeadlineseconds-3f58de74-6891-4298-8fe3-660ec309f395\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:17.118613 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9875/frontend-867dcc8574-g6zqn\" 
node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:17.118721 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9552/pod-77530655-11db-4b67-8600-3f73bd7fce17\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:17.147005 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9875/frontend-867dcc8574-djv6j\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:17.147061 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9875/frontend-867dcc8574-bbnrt\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:17.352693 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9875/agnhost-primary-749f9d858d-52jwm\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:18.913329 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-9154/test-ss-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:19.034890 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9875/agnhost-replica-69bb9d54dd-dbtsw\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:19.035214 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9875/agnhost-replica-69bb9d54dd-vvssf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:21.094101 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-1900/aws-client\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:23.927915 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-9136/downwardapi-volume-df330fbe-3b81-4ec1-a42b-c5815c8e7cb0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:24.136708 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-9723/pod-sharedvolume-e57885a5-ae6e-4622-8242-ebad16b6f989\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:24.359874 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-1542/sample-webhook-deployment-78948c58f6-hxpdz\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:25.746971 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9552/pod-259bedeb-1e9c-4f3f-900f-c53e8bd29e90\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:25.896601 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8595/hostexec-ip-172-20-0-238.ec2.internal-556ld\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:28.793221 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4048/pod-subpath-test-preprovisionedpv-792w\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:29.052103 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-9154/test-ss-1\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:30.772068 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5451/externalsvc-f2m9f\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:30.817270 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5451/externalsvc-gtfzs\" 
node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:31.101938 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2613/hostexec-ip-172-20-0-138.ec2.internal-r9r9l\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:33.422341 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-8046/alpine-nnp-false-1f2f1dd8-4aa2-4d13-8bd9-55be3448a258\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:33.779719 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8859/hostexec-ip-172-20-0-238.ec2.internal-2pl22\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:33.922715 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-7679/test-rolling-update-controller-hzgkt\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:34.000552 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2613/pod-d9001d87-c74c-4c71-bbe6-cd0cfb303ef5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:34.276240 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-4155/ss-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:34.489636 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8595/pod-d505063d-f01c-4b69-8869-58f4a841d769\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:35.659126 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-6591/dns-test-95bf6f06-f278-4b08-b0a1-2da4c804a7cd\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:36.910812 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-5451/execpodfcrwv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:37.616837 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-271/test-container-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:38.311323 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-349/pod-ca9978ac-30cb-44c3-9b2c-4b4c47a8e16a\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:38.680346 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-9154/test-ss-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:38.710925 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2613/pod-2d6c964c-681d-4ddc-bea1-a730d46d845b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:39.586036 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-8382/pfpod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:42.063908 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-7679/test-rolling-update-deployment-796dbc4547-phq6h\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:42.082100 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5413/hostexec-ip-172-20-0-238.ec2.internal-xqhf9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:42.137174 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"provisioning-4111/hostexec-ip-172-20-0-238.ec2.internal-86vnb\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:42.227157 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-4812/dns-test-fbcd5bf7-49da-4d0d-a8e6-32f5f04e5794\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:43.190015 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8859/local-injector\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:43.273539 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4147/pod-subpath-test-preprovisionedpv-kls7\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:44.017378 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-7478/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:44.053219 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-7478/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:44.108349 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-7478/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:44.130625 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-3365/pod-projected-configmaps-5e5d7b0b-7553-4379-9bc4-32ec3fca413d\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:44.138145 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-7478/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:44.246769 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-1-m4b2p\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:44.263059 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-1-sbhpx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:44.268395 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-1-bbtqj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:44.421142 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-1078/pod-configmaps-09afcbf5-706b-49e3-ad22-bea12983f234\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:48.846301 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-2259/aws-injector\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:49.808700 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8512/hostexec-ip-172-20-0-114.ec2.internal-2m28h\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:50.536000 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1053-9699/csi-mockplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:50.599835 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1053-9699/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:50.625173 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1076/hostexec-ip-172-20-0-114.ec2.internal-wtnh7\" node=\"ip-172-20-0-114.ec2.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:50.964232 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-5926/startup-67744686-1507-4a4c-ba93-4bec4dc45bfa\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:52.842183 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-3211/sample-webhook-deployment-78948c58f6-fstv9\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:53.970677 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9008/pod-subpath-test-inlinevolume-wwqq\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:58.841462 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-847/hostexec-ip-172-20-0-138.ec2.internal-ggg4t\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:59.179327 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4111/pod-subpath-test-preprovisionedpv-rxt5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:59.193420 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1076/pod-subpath-test-preprovisionedpv-tg66\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:59.221039 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5413/pod-subpath-test-preprovisionedpv-wxqc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:45:59.414666 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-2-s82sk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:59.414971 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-2-jglvr\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:59.426263 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-2-6jtvq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:45:59.471532 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9985/hostexec-ip-172-20-0-92.ec2.internal-7mfg4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:00.139759 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-7997/failed-jobs-history-limit-27598126-66xcm\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:00.380269 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-3211/webhook-to-be-mutated\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:01.170532 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8859/local-client\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:02.489681 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1053/pvc-volume-tester-z4rg5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:05.516760 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:06.025786 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9985/pod-929f610c-a5af-4613-aad8-3a87f59e423b\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0622 08:46:07.057445 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-5202/downwardapi-volume-2985a5c1-f779-4861-8c70-1bd8aba7204c\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:08.477193 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-7478/test-container-pod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:08.885284 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-1143/pod-configmaps-a089a328-bfc5-4755-9b9c-1e29484117a7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:09.866932 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6190-1769/csi-mockplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:09.955882 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-2259/aws-client\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:11.265085 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-4155/ss-1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:11.504472 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-604/hostexec-ip-172-20-0-138.ec2.internal-4cnkb\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:11.614219 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-exec-pod-fqzv6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:12.454184 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-9346/hostexec-ip-172-20-0-138.ec2.internal-nslgn\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:12.639179 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-3482/pod-74700d3d-d1ad-4444-9e33-cc906e993e46\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:12.867131 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9985/pod-5ede1ada-f0f3-41b9-8d16-f52f3a892242\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:13.608910 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-847/pod-subpath-test-preprovisionedpv-56nh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:14.592367 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8512/pod-774b574d-5863-4857-a338-a56ab1517f97\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:15.388507 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-603/downward-api-582e7689-7938-4da5-9924-178a2114dcaa\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:16.119733 10 volume_binding.go:342] \"Failed to bind volumes for pod\" pod=\"csi-mock-volumes-6190/pvc-volume-tester-4mgn8\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-jbb9d\\\"\"\nE0622 08:46:16.120834 10 framework.go:963] \"Failed running PreBind plugin\" err=\"binding volumes: provisioning failed for PVC \\\"pvc-jbb9d\\\"\" plugin=\"VolumeBinding\" pod=\"csi-mock-volumes-6190/pvc-volume-tester-4mgn8\"\nE0622 08:46:16.121038 10 factory.go:225] \"Error scheduling pod; retrying\" err=\"running PreBind plugin \\\"VolumeBinding\\\": binding 
volumes: provisioning failed for PVC \\\"pvc-jbb9d\\\"\" pod=\"csi-mock-volumes-6190/pvc-volume-tester-4mgn8\"\nI0622 08:46:17.261139 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"var-expansion-8081/var-expansion-76bf0531-8700-4efd-b22f-0c855447fc02\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:17.559711 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-6652/pod-b94402b7-8a47-41d1-b744-c32b14e43576\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:17.609313 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:17.915939 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-847/pod-subpath-test-preprovisionedpv-56nh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:18.090989 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6327-2410/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:18.590217 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6190/pvc-volume-tester-4mgn8\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:18.743442 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8512/hostexec-ip-172-20-0-114.ec2.internal-29czj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:19.787412 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-exec-pod-jnprv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:22.365630 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-9868/pod-configmaps-69e5ddd7-fd64-42d0-9035-2d36589249bc\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:22.437105 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6327/pod-subpath-test-dynamicpv-sggz\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:22.437449 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-9072/pod-f77cdb74-ed8f-4db4-b3c1-7e11e892c235\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:22.554918 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/service-headless-9md52\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:22.703059 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/service-headless-bsrmv\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:22.738085 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/service-headless-ww2sf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:23.342416 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-128/pod-configmaps-808eefd5-ecd5-419d-ac4e-57ceb9c625d3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:23.505438 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"subpath-5299/pod-subpath-test-configmap-xmcr\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:24.486127 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-4307-4434/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:24.507521 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-7569/agnhost-primary-mwhc2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:26.037458 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5977/test-ss-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:28.394154 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-604/pod-subpath-test-preprovisionedpv-kmr6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:28.538540 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-7429/pod-17b68f90-b1e9-4101-84c0-55cff75c27c6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:29.601449 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-9346/pod-12ffdfae-067c-4836-a792-62f44d18b790\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:29.738858 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-6454/startup-bc4baf7d-1d07-4945-a522-2e97ff8b7e09\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:30.969618 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3262/hostexec-ip-172-20-0-92.ec2.internal-qshqh\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:31.023992 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2909/hostexec-ip-172-20-0-238.ec2.internal-hdbbb\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:31.095798 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2376/ss-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:31.666558 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/service-headless-toggled-7pnhx\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:31.666902 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/service-headless-toggled-hrvqg\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:31.666980 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/service-headless-toggled-s8pbj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:31.947981 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-1697/test-webserver-a2c55271-2ccb-4e48-b279-65ae35b5d00b\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:33.546440 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2909/pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:46:33.748730 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-9346/hostexec-ip-172-20-0-138.ec2.internal-br58t\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:34.577189 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" 
pod=\"persistent-local-volumes-test-2909/pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-lszs8\\\" not found.\"\nI0622 08:46:34.583471 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nE0622 08:46:34.595966 10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3.16fae4fe76c10007\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-2909\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-2909\", Name:\"pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3\", UID:\"86ed53c1-94b5-4402-8f44-27b1aaa44e9e\", APIVersion:\"v1\", ResourceVersion:\"28751\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-lszs8\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 22, 8, 46, 34, 577264647, time.Local), LastTimestamp:time.Date(2022, time.June, 22, 8, 46, 34, 577264647, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3.16fae4fe76c10007\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2909 because it is being terminated' (will not retry!)\nI0622 08:46:34.730588 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-7429/hostexec-ip-172-20-0-114.ec2.internal-kxf2n\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:34.806064 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7223/exec-volume-test-inlinevolume-r967\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:36.141895 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-336/hostexec-ip-172-20-0-238.ec2.internal-5xm7k\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:36.578889 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-2909/pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-lszs8\\\" not found.\"\nE0622 08:46:36.583602 10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3.16fae4fe76c10007\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-2909\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-2909\", Name:\"pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3\", UID:\"86ed53c1-94b5-4402-8f44-27b1aaa44e9e\", APIVersion:\"v1\", ResourceVersion:\"28804\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-lszs8\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 22, 8, 46, 34, 577264647, time.Local), LastTimestamp:time.Date(2022, time.June, 22, 8, 46, 36, 578959596, time.Local), Count:2, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-dee507b8-0858-4734-a4d5-4c4694d2a5f3.16fae4fe76c10007\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2909 because it is being terminated' (will not retry!)\nI0622 08:46:37.907362 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3262/pod-6ce0ad09-e7a8-4562-a1c7-2f5f7d897767\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:39.004671 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-4155/ss-2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:40.669754 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5977-2822/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:40.753474 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:40.954568 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-336/pod-12faa56f-6f83-4f4d-8c4b-1747870e439b\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:41.133843 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:41.274212 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2376/ss-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:41.375178 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4307/pvc-volume-tester-ddpjl\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:42.351169 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-8732/pod-360ce291-e1be-4f45-8ea5-03e9950b6b2b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:45.762484 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3528/hostexec-ip-172-20-0-238.ec2.internal-65qkv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:45.875682 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-336/pod-6c3fa272-93e8-4296-81c1-5f34d5c0c31e\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:45.887991 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7173/hostexec-ip-172-20-0-138.ec2.internal-pwbrm\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:47.753118 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1092/hostexec-ip-172-20-0-238.ec2.internal-jn2wb\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:48.062662 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5977/pvc-volume-tester-q8smf\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:48.848641 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-up-exec-pod-k26nn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:49.226990 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-exec-pod-nw446\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:50.931085 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-372/ss-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:51.417127 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7173/pod-0b52980e-745d-40df-b616-a62fbe989f1e\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:51.891872 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"var-expansion-4480/var-expansion-3b19b978-34ed-4aad-966a-8245b83fa3b1\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:52.464500 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-9351/pod-adoption\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:53.734054 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5872-239/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:53.805156 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5872-239/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:54.344514 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1092/pod-82db0f71-a6a2-4d25-868e-013f88ec7e75\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:54.460091 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-6816/pod-projected-secrets-64a86604-343c-48ac-93ea-e5982e2a1299\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:56.262548 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:56.432759 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-4936/pod-e476e1a5-96b4-4833-be8c-3fe8bef89e1e\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:57.271471 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-3-nqfgx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:57.312367 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-3-45qld\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0622 08:46:57.323840 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/up-down-3-lz62s\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:57.870735 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-4298/downwardapi-volume-17d4b595-5528-4b54-9e28-3e8ddaa47db0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:46:58.442330 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3528/exec-volume-test-preprovisionedpv-d47m\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:59.417231 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4248/pod-handle-http-request\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:46:59.682989 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ttlafterfinished-97/rand-non-local-7jbmh\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:01.542257 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4248/pod-with-prestop-http-hook\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:02.614619 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2460/hostexec-ip-172-20-0-238.ec2.internal-w7zr7\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:04.075789 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-5872/pvc-volume-tester-b2kct\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:47:05.011800 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3717/emptydir-injector\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:08.066390 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6465/hostexec-ip-172-20-0-92.ec2.internal-j6mdk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:08.883542 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:09.035704 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-1217/sample-webhook-deployment-78948c58f6-2t6d6\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:09.076585 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-2582/pfpod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:09.377094 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:09.821856 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-2794/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:09.864498 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-2794/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:09.908804 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-2794/netserver-2\" 
node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:09.938535 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-2794/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:11.518771 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ttlafterfinished-97/rand-non-local-sbdw9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:12.249431 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-6158/pod-projected-configmaps-0279f7b3-9c18-4253-8c15-2dad6b7a1047\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:12.828912 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-1852/pod-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:12.865107 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-1852/pod-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:12.893658 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-1852/pod-2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:13.476933 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-exec-pod-dzkgr\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:13.557257 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-2648/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:13.596311 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-2648/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:13.631327 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-2648/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:13.654423 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-2648/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:14.734469 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6465/pod-subpath-test-preprovisionedpv-6sfv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:15.574137 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:17.808340 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5663/pod-subpath-test-inlinevolume-fnhm\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:19.709558 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ttlafterfinished-97/rand-non-local-r8vb9\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:19.991925 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-nbzjg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.012664 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-njw2b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.023519 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-c2qtk\" node=\"ip-172-20-0-114.ec2.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.036052 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-nt2qx\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.051060 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-lzgvm\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.060767 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-75lpx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.060895 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-hlfwc\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.077777 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-kjdwg\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.086070 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-tb2h7\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.086169 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-wkssj\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:20.395819 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5928/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:20.436329 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5928/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:20.466216 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5928/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:20.503107 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5928/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:20.865161 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-4155/ss-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:22.864139 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"sysctl-9732/sysctl-98a87249-3f41-4ab6-a84e-554ee5b0a3e5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:23.661014 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-up-exec-pod-tzh5k\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:24.145871 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:26.054614 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-5977/test-ss-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:26.067584 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-5343/pod-configmaps-5229916e-4efd-4c91-94bb-fe8da57ff9ea\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:26.708240 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-8977/sample-webhook-deployment-78948c58f6-5z924\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:28.860772 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"provisioning-2460/pod-subpath-test-preprovisionedpv-fhn4\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:29.083094 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-4989/pod-secrets-2e8c2719-71e2-4c03-9ade-1d1a19afd2b5\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:30.239268 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1596/verify-service-up-exec-pod-7psb4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:30.556131 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3879/configmap-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:31.578501 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4098-6825/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:31.631480 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4098-6825/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:33.714635 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-708/downwardapi-volume-ad51cc7c-8f00-44f7-84fa-9ee14ece8f02\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:33.913372 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"subpath-7512/pod-subpath-test-configmap-2r9v\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:33.986659 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-8977/to-be-attached-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:34.035457 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-2472/test-rollover-controller-59jv4\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:34.248029 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"init-container-4558/pod-init-6d14b60b-1b4f-48fa-a5bd-e4d575d4032d\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:34.273527 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-9100/rs-x28zj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:34.307139 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-2794/test-container-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:34.524553 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-7567/server-envvars-30765f48-832e-433d-9c7e-d38d3f7781fa\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:34.549891 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-7633/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:35.322350 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"var-expansion-7631/var-expansion-a7326482-0abe-417d-bf71-fbe182dc517e\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:37.578712 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-2676/pod-secrets-9e405c4b-53eb-41a7-aa4e-eb7189b0e969\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:40.538829 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"volumelimits-4192-5457/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:41.843778 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9778/httpd\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:41.862012 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-4098/pvc-volume-tester-v6j28\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:47:43.048698 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7241/pod-subpath-test-inlinevolume-hfsm\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:43.826960 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3056-6231/csi-hostpathplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:43.969451 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3056/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:47:44.052987 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-2648/test-container-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:44.087187 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-2648/host-test-container-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:45.188730 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3056/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:47:46.359699 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-3917/pod-b0de527e-003c-467e-880e-399e61aeba61\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:47.627378 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3056/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:47:47.740340 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5882/hostexec-ip-172-20-0-138.ec2.internal-vttkj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:48.898968 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5928/test-container-pod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:48.929404 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-5928/host-test-container-pod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:51.630314 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3056/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:47:51.920474 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-4155/ss-1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:52.708163 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-7567/client-envvars-16a39688-ad81-4c13-94e6-b58c63b69e75\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0622 08:47:55.496704 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"containers-1885/client-containers-8eee898b-532f-4570-ac9a-ccd3b1bc9262\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:55.524596 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7349-5016/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:55.561942 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7349-5016/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:56.148692 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-3326/busybox-readonly-false-0c45f4fe-39fd-4fcb-91df-19f08d3262ff\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:56.204114 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-2472/test-rollover-deployment-784bc44b77-7k9zf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:56.482221 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-2472/test-rollover-deployment-668b7f667d-zwr2l\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:57.913220 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:57.924155 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:57.926476 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:58.191256 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pv-1729/pod-ephm-test-projected-nx68\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:58.459612 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5882/pod-subpath-test-preprovisionedpv-9bhx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:47:58.731306 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4062/pod-subpath-test-inlinevolume-pt94\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:59.359481 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-2768/pfpod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:47:59.643805 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3056/hostpath-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:00.141251 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-3832/concurrent-27598128-7ljjr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:00.246717 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-3118/downwardapi-volume-31b48d70-c151-436f-a56b-4dbc63c2dd01\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:01.269107 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9772/hostexec-ip-172-20-0-138.ec2.internal-rczzz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 
08:48:01.794008 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:02.062911 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-4155/ss-2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:02.494623 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-2954/fail-once-local-tr6wt\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:02.495119 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-2954/fail-once-local-txgbx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:02.593520 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-1\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:02.609083 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9693/hostexec-ip-172-20-0-238.ec2.internal-vc5gj\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:02.890989 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7349/pvc-volume-tester-slzv6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:02.892903 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4692/hostexec-ip-172-20-0-114.ec2.internal-m9md8\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:03.601087 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:05.388807 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-2920/busybox-4bc2956e-92c6-4ef7-87d5-dce9859f5496\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:06.992478 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:09.436498 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4692/pod-c61932f4-2076-4e75-aa98-399eb97db4c4\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:10.907989 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-3\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:11.983117 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-8647/dns-test-f9341bd5-642b-45ea-87a6-b4fffd14c4ec\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:12.176645 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4692/pod-2c8f75f0-3872-4bb1-b416-cbb0fa33a8cb\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:13.165149 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8611/hostexec-ip-172-20-0-238.ec2.internal-j8rgh\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:13.976659 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9772/pod-subpath-test-preprovisionedpv-8pqz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:14.739398 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:16.352095 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-4054/metadata-volume-7ad06a2c-daca-48eb-bc16-82fa435aa80d\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:16.689652 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:17.034445 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"conntrack-1831/pod-client\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:17.535938 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:17.575952 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3056/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:48:17.794284 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:18.654830 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3056/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:48:19.256765 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9693/pod-54a5eb0f-b027-4c44-8787-7f0861a39600\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:20.545236 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7237-4346/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:20.660243 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3056/hostpath-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:21.114907 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-2954/fail-once-local-btvhs\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:21.203919 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"conntrack-1831/pod-server-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:22.499539 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9150/inline-volume-sqf4j\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-sqf4j-my-volume\\\".\"\nI0622 08:48:22.522290 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-3\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:22.965478 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-2717/downwardapi-volume-2091aeb7-f5ec-416f-95e6-4b05945addb7\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:23.123725 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-2954/fail-once-local-27k2n\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:23.245750 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1656/netserver-0\" 
node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:23.263353 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8622/pod-subpath-test-inlinevolume-c82s\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:23.289358 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1656/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:23.323612 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1656/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:23.344395 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:23.358226 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1656/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:23.702678 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3181/hostexec-ip-172-20-0-138.ec2.internal-zdv6q\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:24.191941 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-9150-2231/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:24.278940 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9150/inline-volume-tester-9vmmt\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-9vmmt-my-volume-0\\\".\"\nI0622 08:48:25.107488 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:25.880644 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"hostpath-4647/pod-host-path-test\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:26.848211 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7237/pod-subpath-test-dynamicpv-5tn9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:28.316158 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:29.086898 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-4\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:29.886381 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-8611/exec-volume-test-preprovisionedpv-lxk6\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:29.908439 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6840/hostexec-ip-172-20-0-92.ec2.internal-nj9br\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:30.006641 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9693/pod-b35a1c5b-5dcf-401c-9dbd-addcf876e8b5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:31.090614 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 
08:48:31.788395 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-4144/downwardapi-volume-c9c46358-0c7d-497c-ba8b-c95790817984\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:32.436768 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-6gq5s\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-6gq5s-my-volume\\\".\"\nI0622 08:48:33.544754 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"conntrack-1831/pod-server-2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:34.024259 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:34.334482 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-3669/pod-configmaps-d67da186-c0d1-4bbe-8def-11f0f6ad1ef1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:34.471039 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-3917/pod-265a2eae-6a78-41e4-a308-776288b210b5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:34.688730 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-tester-cthf2\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-cthf2-my-volume-0\\\".\"\nI0622 08:48:34.697653 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-9150/inline-volume-tester-9vmmt\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:36.117058 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3407-5317/csi-mockplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:36.149355 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3407-5317/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:36.664944 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-tester-cthf2\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:48:37.386097 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8355/inline-volume-r2qjc\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-r2qjc-my-volume\\\".\"\nI0622 08:48:38.020292 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-8\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:38.666094 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-tester-cthf2\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:48:39.005326 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:39.823260 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:39.850897 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-5629/security-context-14c90f17-965e-4891-97ff-371708832185\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:40.803053 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-880/termination-message-container1bbc78e3-1322-4c6a-8de5-4368efd007e9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:41.275678 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8355-1727/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:41.381763 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8355/inline-volume-tester-6prmj\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-6prmj-my-volume-0\\\".\"\nI0622 08:48:42.669559 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8355/inline-volume-tester-6prmj\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:48:42.678034 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-5028/inline-volume-tester-cthf2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:42.947378 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-5174/security-context-e53ef479-e15c-48b2-8538-66df111c434d\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:44.006214 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6798-8179/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:44.090214 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6798-8179/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:44.127301 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6798-8179/csi-mockplugin-resizer-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:44.491815 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3181/pod-63a1dc07-6411-4c8b-82c6-f2a616ab0d8e\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:44.543284 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5782-2141/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:44.654453 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6840/pod-subpath-test-preprovisionedpv-4nd4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:45.024714 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:45.055001 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-9103/exec-volume-test-preprovisionedpv-2flx\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:45.669277 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8355/inline-volume-tester-6prmj\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:48:48.007228 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-3407/pvc-volume-tester-zxfr2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:48.614792 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:49.683026 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8355/inline-volume-tester-6prmj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:50.445375 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-8971/pod1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:50.477795 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-8971/pod2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:50.513453 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-8971/pod3\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:51.671777 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1656/test-container-pod\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:52.615959 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:52.647101 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3181/hostexec-ip-172-20-0-138.ec2.internal-tvvg8\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:55.930201 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6798/pvc-volume-tester-s2nqb\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:55.972177 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8460/hostexec-ip-172-20-0-92.ec2.internal-vq2pn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:58.287231 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-8263/labelsupdate2060db5a-f519-45ef-a3e5-9c3f96e3794e\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:48:58.804424 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5406-4086/csi-mockplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:58.867652 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5406-4086/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:48:59.976442 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-4034-1509/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:00.025469 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-4034/inline-volume-tester-np5zj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:00.068882 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-10\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:00.156192 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-3832/concurrent-27598129-dhx2l\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:00.158193 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"gc-5435/simple-27598129-qfxbr\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:00.239493 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-9841/downwardapi-volume-85172862-f486-40ea-9e35-a22f9b469ee4\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:01.481293 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5782/pvc-volume-tester-bclmm\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:02.102338 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7317/hostexec-ip-172-20-0-138.ec2.internal-9rfdz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:02.654939 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-7\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:04.414960 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:05.512731 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8355/inline-volume-tester2-xqfkw\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester2-xqfkw-my-volume-0\\\".\"\nI0622 08:49:05.998020 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:06.713143 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8355/inline-volume-tester2-xqfkw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:06.965385 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-4007/hostexec-ip-172-20-0-238.ec2.internal-6p8gw\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:08.033571 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:08.931602 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-tester2-w5kg6\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester2-w5kg6-my-volume-0\\\".\"\nI0622 08:49:09.440890 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-11\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:09.561404 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:10.468670 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3896-4067/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:10.689038 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-tester2-w5kg6\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:49:10.792789 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"mount-propagation-4344/hostexec-ip-172-20-0-238.ec2.internal-mg7tn\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0622 08:49:10.799716 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5406/pvc-volume-tester-t68rk\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:11.164547 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"subpath-5407/pod-subpath-test-projected-qxlt\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:12.028853 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4266/test-pod-c7554a5f-9062-4d86-af74-5fe4f32affe4\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:12.673749 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-10\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:13.347851 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7317/pod-subpath-test-preprovisionedpv-4gcz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:13.417468 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-2073/pod-projected-secrets-bc61d3b3-83d9-4bf8-bbd1-03706b2f4cdf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:13.689674 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5028/inline-volume-tester2-w5kg6\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:49:14.923856 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-4007/pod-318f6cb5-8e23-4e1d-b5b4-57b73ba01556\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:15.038747 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8460/pod-subpath-test-preprovisionedpv-jmj5\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:15.262476 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6766/hostexec-ip-172-20-0-138.ec2.internal-p7gcx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:15.820527 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-12\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:17.165490 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-11\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:17.615995 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-10\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:17.698567 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-5028/inline-volume-tester2-w5kg6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:19.077969 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-4007/hostexec-ip-172-20-0-238.ec2.internal-wbw6v\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:20.067287 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-8101/pod-projected-configmaps-07edea21-f773-471c-9574-742fd30d691d\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:20.099385 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8052/hostexec-ip-172-20-0-138.ec2.internal-9bcrp\" 
node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:20.426104 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-ff68f99db-pwqzv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:20.426479 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-ff68f99db-4ksw8\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=3\nI0622 08:49:20.450129 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-ff68f99db-zd7lg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=2\nI0622 08:49:22.451981 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3896/pvc-volume-tester-2jdzm\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:22.728315 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5486/hostexec-ip-172-20-0-92.ec2.internal-mhqkq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:23.025777 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3899/pod-1cbf7a1c-9c14-4e49-9e98-a1fe5f5f08d5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:23.040318 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8852/hostexec-ip-172-20-0-138.ec2.internal-xmncw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:23.818114 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-6500/suspend-true-to-false-s5xp2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:23.838109 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-6500/suspend-true-to-false-pr9lq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:24.026337 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-13\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:24.872386 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-1185/exec-volume-test-inlinevolume-dbts\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:25.353282 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2526-6886/csi-mockplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:26.473512 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-718/sample-webhook-deployment-78948c58f6-jhptp\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:27.157470 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3899/hostexec-ip-172-20-0-238.ec2.internal-jrw92\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:27.525114 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-402/pod-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:27.552327 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-402/pod-1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:27.592382 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-402/pod-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:27.626509 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-12\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:27.921771 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4449-3217/csi-mockplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:29.706677 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8052/pod-subpath-test-preprovisionedpv-hslc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:29.758779 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8852/pod-6c8a5c4d-496c-4635-836b-e6ef696dc7f2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:30.033092 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6766/pod-subpath-test-preprovisionedpv-ljvw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:30.128715 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-2-14\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:30.893100 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-6500/suspend-true-to-false-zcmqj\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:30.970412 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6575/inline-volume-fhnh4\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-fhnh4-my-volume\\\".\"\nI0622 08:49:31.681866 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-6500/suspend-true-to-false-5zj42\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:31.820665 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-11\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:33.201989 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6575/inline-volume-tester-qb5kh\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-qb5kh-my-volume-0\\\".\"\nI0622 08:49:33.919010 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8852/hostexec-ip-172-20-0-138.ec2.internal-8ghtd\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:35.910415 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-12\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:36.224222 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:37.049309 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-13\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:37.095650 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-5411/httpd\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:37.743978 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2526/pvc-volume-tester-9vgm9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:38.722604 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-6575/inline-volume-tester-qb5kh\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:39.108652 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-3578/alpine-nnp-nil-12b46826-311b-4b47-bb36-accf9696a656\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:39.289479 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:39.870233 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2761/hostexec-ip-172-20-0-114.ec2.internal-qgnr2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:40.104994 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-1189/deployment-shared-map-item-removal-7c658794b9-ktc4s\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:40.133461 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-1189/deployment-shared-map-item-removal-7c658794b9-9mfvm\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:40.149074 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-1189/deployment-shared-map-item-removal-7c658794b9-s227n\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:40.207635 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-1189/deployment-shared-map-item-removal-7c658794b9-nvnp5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:40.703171 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-5993/condition-test-rfpp6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:40.720942 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-5993/condition-test-prn2f\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:41.284157 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-13\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:41.604672 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:42.267756 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8643-7896/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:43.390239 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6080/hostexec-ip-172-20-0-92.ec2.internal-6xbbq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:44.281296 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5486/pod-subpath-test-preprovisionedpv-2rsg\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:44.353030 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-5147-3978/csi-hostpathplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:44.489396 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2761/pod-subpath-test-preprovisionedpv-rwhk\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:45.556842 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4449/pvc-volume-tester-khwsx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:45.685796 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-1-14\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:45.829105 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-1239/test-ss-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:47.862398 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3690/inline-volume-gfrff\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-gfrff-my-volume\\\".\"\nI0622 08:49:50.107633 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3690/inline-volume-tester-zs89p\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-zs89p-my-volume-0\\\".\"\nI0622 08:49:50.277122 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7912/inline-volume-r4jmw\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-r4jmw-my-volume\\\".\"\nI0622 08:49:50.481230 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7912/inline-volume-tester-nrvmn\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-nrvmn-my-volume-0\\\".\"\nI0622 08:49:50.648081 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8643/pod-444f0a9f-0aa9-44db-addd-b5a2e78f0986\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:51.724271 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-7912/inline-volume-tester-nrvmn\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:49:52.701268 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-5147/pod-348a1bc3-01da-4624-a452-783f17ced48d\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:53.729070 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-293/pod-submit-status-0-14\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:54.651191 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-5662/hostexec-ip-172-20-0-138.ec2.internal-tmmx8\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:54.735183 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-7912/inline-volume-tester-nrvmn\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:54.796031 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-8643/hostexec-ip-172-20-0-238.ec2.internal-cb6nc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:55.745753 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-3690/inline-volume-tester-zs89p\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:56.167118 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-1130/pod-configmaps-d1beb62b-d1f8-49e2-a637-a70d70bf6cde\" 
node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:56.443771 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-788-8841/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:56.479691 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:57.578938 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-2612/hostexec-ip-172-20-0-114.ec2.internal-vwpxd\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:59.578759 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-1781/termination-message-container52856c7d-92f2-442f-9c55-aad296f5696e\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:49:59.693138 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-4945/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:59.725058 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-4945/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:59.760000 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-4945/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:59.790114 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-4945/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:49:59.883991 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.033040 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6080/pod-subpath-test-preprovisionedpv-sfb5\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:00.136754 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-1240/concurrent-27598130-6hkjh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.572138 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-5303-1059/csi-hostpathplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:00.684954 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-defaultsa\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.721911 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-mountsa\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.746729 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-nomountsa\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.765286 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-788/pod-subpath-test-dynamicpv-tbng\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:00.791926 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-defaultsa-mountspec\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.818990 10 scheduler.go:615] \"Successfully bound pod to 
node\" pod=\"svcaccounts-4755/pod-service-account-mountsa-mountspec\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.860603 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-nomountsa-mountspec\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.891466 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-defaultsa-nomountspec\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.901068 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-1669/backofflimit-k6kb7\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.914279 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-mountsa-nomountspec\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:00.968947 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"svcaccounts-4755/pod-service-account-nomountsa-nomountspec\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:03.276053 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-8363/httpd\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:04.668673 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-6867/dns-test-c12be025-0490-44d1-b167-62b16c674a01\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:07.010703 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-5528/pod-subpath-test-inlinevolume-5hfd\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:08.941212 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-5303/pod-a8b59bad-86ad-4331-83a9-83396f271c98\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:09.669646 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:10.067991 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-1669/backofflimit-rpsmv\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:13.235407 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3153/inline-volume-h6kxl\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-h6kxl-my-volume\\\".\"\nI0622 08:50:13.321353 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-8798/metadata-volume-e1fc4bae-4a3d-439f-a919-5f20f1120186\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:13.476402 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-5662/exec-volume-test-preprovisionedpv-f6fd\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:13.755208 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-4791/agnhost-primary-hvcxh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:14.220957 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-2612/pod-b3d8bc3f-9bc8-4066-8bc6-38632a18c92e\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:14.880410 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"ephemeral-3153-6142/csi-hostpathplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:14.951277 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3153/inline-volume-tester-98r5r\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-98r5r-my-volume-0\\\".\"\nI0622 08:50:16.908765 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:17.671964 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3292/aws-injector\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:17.761688 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-7326/pod-6432dec8-f11d-43fd-8232-f7dd688d6e14\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:17.852095 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-containers-test-3515/ephemeral-containers-target-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:19.298914 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-788/pod-subpath-test-dynamicpv-tbng\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:20.116302 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1604-3601/csi-mockplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:20.217790 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1604-3601/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:22.294077 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-4910/pod-always-succeed4efb18e3-4578-4d48-adf5-a1c23bef84fe\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:22.389915 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-2612/hostexec-ip-172-20-0-114.ec2.internal-nfjwz\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:22.756350 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-3153/inline-volume-tester-98r5r\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:22.891883 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:24.534054 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubelet-test-2380/bin-false647c1e2b-8131-4ea5-bfb0-c0c89cc08957\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:24.702409 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4972/hostexec-ip-172-20-0-92.ec2.internal-54bgb\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:25.520506 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1604/pvc-volume-tester-r9wcn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:26.081139 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-4945/test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 
08:50:26.116190 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-4945/host-test-container-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:28.012235 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4972/pod-0d90e70a-f7ff-4ceb-b9bf-a10de2dd7417\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:28.859107 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-2424/ss2-2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:29.392095 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-6314/metadata-volume-8889724e-979d-430a-b272-48c599fde3fa\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:29.635297 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3278-4532/csi-mockplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:29.723613 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3278-4532/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:30.680711 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-4621/pod-4b36679a-5afa-4864-a7c0-4902d804bbca\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:31.020018 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-4603/sample-webhook-deployment-78948c58f6-pwljj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:32.040844 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9921/httpd-deployment-5bf95dfb4d-n7vvw\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:32.062625 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9921/httpd-deployment-5bf95dfb4d-sd4pz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:32.417575 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-1645/pod-release-lj9kh\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:32.527183 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replication-controller-1645/pod-release-zfg45\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:32.770352 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9921/httpd-deployment-5bf95dfb4d-b9dm9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:32.891251 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-3714/pod-secrets-3b2bd452-0836-4c3d-85d4-e2f2fc55ae8f\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:33.120810 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-9921/httpd-deployment-95bc5655f-xf48p\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:33.221050 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-6145/hostexec-ip-172-20-0-138.ec2.internal-lbvn5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:35.113189 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9641-4272/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 
08:50:35.119143 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8336/hostexec-ip-172-20-0-92.ec2.internal-6pw9x\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:35.188789 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9641-4272/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:36.908737 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-9443/pod-projected-secrets-d3c2198f-4f8c-48d1-851e-bacf6c7d5323\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:39.213081 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-8201/annotationupdate7355540c-e690-4ed4-9a00-7ce2418dad2c\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:40.429648 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4126-2318/csi-hostpathplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:43.874189 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-6145/local-injector\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:43.922997 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8336/pod-subpath-test-preprovisionedpv-mlv5\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:44.713392 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4126/pod-subpath-test-dynamicpv-fkpw\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:45.918522 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1481/pod-subpath-test-inlinevolume-hl9s\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:47.059946 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9641/pvc-volume-tester-72jhj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:48.887034 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"proxy-2002/agnhost\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:49.656447 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-1199/test-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:50.234422 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-8804/busybox-readonly-true-dd30734d-0784-40c6-b4ab-8d3b8fd63b46\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:51.460404 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-4374/test-rs-4wqld\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:51.485791 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-6145/local-client\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:52.286568 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2076/inline-volume-4s2jg\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-4s2jg-my-volume\\\".\"\nI0622 08:50:52.339972 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3278/pvc-volume-tester-2xddf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0622 08:50:54.701247 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8133/hostexec-ip-172-20-0-114.ec2.internal-w8fv4\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:55.641467 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-4374/test-rs-lh486\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:55.707892 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-4374/test-rs-pxc8x\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:55.733952 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"replicaset-4374/test-rs-x4xzf\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:50:56.340590 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-2076-8244/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:56.379717 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-2076/inline-volume-tester-2crx2\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-2crx2-my-volume-0\\\".\"\nI0622 08:50:56.425074 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7313/hostexec-ip-172-20-0-92.ec2.internal-f8r2t\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:50:57.768652 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-399-5366/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:00.209339 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-4109/concurrent-27598131-zkkgn\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:00.209963 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-1240/concurrent-27598131-hpmdp\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:00.277841 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-220/image-pull-test267405bd-9d53-4f09-bec3-83dd28a84e92\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.171302 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9641/inline-volume-5gmxw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.388556 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-hfcwm\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.402200 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-xmgwm\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.435921 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-rndsf\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.439367 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-2qwqx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.477742 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-v8dnp\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.477897 10 scheduler.go:615] \"Successfully bound pod 
to node\" pod=\"gc-7380/simpletest.rc-sts5v\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.477960 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-g2mhh\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.513497 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-z5897\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.537254 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-7qws8\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.580611 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-j4qmz\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590377 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-sdzph\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590470 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-dxrtn\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590534 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-t8sbj\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590597 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-kv792\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590647 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-txlsr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590709 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-g9mdb\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.590761 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-skj2j\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.607768 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-99qnq\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.608062 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4h7b5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.608141 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-l45hm\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.608282 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-2k9pv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.608675 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-62d7h\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.609096 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-s7c47\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.609187 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-wz9c6\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.638146 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"gc-7380/simpletest.rc-8drz8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.638483 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-qlnc5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.638716 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-8szj4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.638984 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4bdwk\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.662815 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-md2wm\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.688574 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-856kh\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.704887 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-lv5k2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.734731 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4kq2z\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.773579 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-r7nt5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.804659 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-2076/inline-volume-tester-2crx2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:01.824506 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-6064/pod-subpath-test-dynamicpv-4pct\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.841821 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-5kn8w\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:01.998717 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-7bsl5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.024524 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-tl6x7\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.030952 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-8rgkr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.069082 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-8vpzg\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.169418 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-kndz6\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.298189 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-d5w22\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.324022 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-82j8l\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.359211 10 scheduler.go:615] \"Successfully bound pod 
to node\" pod=\"gc-7380/simpletest.rc-t8vwk\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.359592 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4v6mx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.384146 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-t8bzp\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.391127 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-c8xzt\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.464819 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-2ml45\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.541163 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-s887v\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.563786 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-qs8bd\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.697730 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-ngqbm\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.771665 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-7mnxr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.771981 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-zprzx\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.835000 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-5mjxw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.848961 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-h4cdd\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.855859 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-9fjgf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.918939 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-s5q9f\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.941521 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-svd4d\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:02.973761 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-nvzrr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.016611 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-g8mpz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.108542 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-7hzbc\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.129022 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-565lq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.181402 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"gc-7380/simpletest.rc-jpgbk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.256793 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-8tzxx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.293727 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-6cbfs\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.318413 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3933-8484/csi-hostpathplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:03.371602 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-6dpxj\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.427592 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-6vm9b\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.490042 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-d4xc2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.517180 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-qfjml\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.558909 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-wdl48\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.620847 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-z5wgn\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.679609 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-kpszk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.702856 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-hh2cs\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.754301 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-m72r6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.805844 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-6hss8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.873251 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-ckcv4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.901802 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8133/pod-0c9b14e4-bf7d-4a26-a823-01e467c556f5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:03.914450 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-mn8q5\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:03.960508 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4td5r\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.006280 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-nvbv6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.055606 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-5gxp5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.105082 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-mhrw9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.163139 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-rldb5\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.208091 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-5jgrl\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.254302 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-wzxcq\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.298854 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4mf2f\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.357797 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-dfw5w\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.401870 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-2kxld\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.448812 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-2vvcx\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.495599 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-9z4kj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.546997 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-89zxg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.598090 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-27jcs\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.647622 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-97vwg\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.696534 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-wjhk7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.745555 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-xvghd\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.803727 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-9q7xh\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.857028 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-wv49h\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.900236 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-k9d59\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:04.953919 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-4zxg8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:05.005505 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-cqrnl\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:05.108442 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-qjqnp\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:05.146112 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-pjz8c\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:05.195148 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-7380/simpletest.rc-c8bhb\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:06.405232 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-7326/pod-de5ce9ae-3646-430c-bd77-29b5c2c696f1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:08.989157 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-5572/pod-handle-http-request\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:11.245234 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubelet-test-6697/busybox-scheduling-fa528828-f265-4fd2-ab9b-6ec2f29f93d1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:12.288281 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3292/aws-client\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:14.043600 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7313/pod-subpath-test-preprovisionedpv-wqvw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:14.269991 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-399/pod-subpath-test-dynamicpv-cd4f\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:16.040599 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-4802/hostexec-ip-172-20-0-238.ec2.internal-khpcn\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:29.246924 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-7306/pod-configmaps-505d77cd-7217-49ad-825b-649e30947586\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:30.515389 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7313/pod-subpath-test-preprovisionedpv-wqvw\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:36.460413 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7403/aws-injector\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:37.257226 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-1704/exec-volume-test-dynamicpv-p7tf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:38.360336 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-8646/busybox-08a944bd-b01e-4b68-bc18-f5ec7bf54eea\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:39.036529 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-7390/busybox-privileged-false-41471032-6e27-40cb-8c04-70cf06835576\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:40.075486 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"volumemode-3933/pod-caaf6f8a-2abb-4770-91bc-b8990a77c61b\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:41.813569 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"var-expansion-8725/var-expansion-80bbc2c4-edce-475d-b079-18227ad77e69\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:44.144358 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-4802/exec-volume-test-preprovisionedpv-lh5p\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:46.035334 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:47.368832 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"webhook-9615/sample-webhook-deployment-78948c58f6-6xjz9\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:53.269092 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3081/pod-subpath-test-inlinevolume-nc5t\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:54.229901 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4301/hostexec-ip-172-20-0-238.ec2.internal-s6bnp\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:54.489186 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-207-2913/csi-mockplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:54.511395 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-207-2913/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:54.595804 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"port-forwarding-6399/pfpod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:55.125035 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-5572/pod-with-poststart-http-hook\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:55.579803 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-6702/hostexec-ip-172-20-0-114.ec2.internal-n2z8k\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:51:55.646294 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pvc-protection-783/pvc-tester-q6dfg\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:51:55.855524 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-5419/image-pull-test32828ca4-3232-4b03-9c08-0135681bf628\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:00.227087 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3933/hostexec-ip-172-20-0-92.ec2.internal-2kg5g\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:02.027987 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-125/hostexec-ip-172-20-0-92.ec2.internal-82mhm\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:02.065295 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:02.169385 10 scheduler.go:615] \"Successfully bound pod to 
node\" pod=\"pods-6872/pod-update-28f91042-b08a-4509-9480-51bf1063345e\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:04.200414 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2818/hostexec-ip-172-20-0-138.ec2.internal-bhj7m\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:04.297545 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubelet-test-3457/bin-false5627b783-533f-4331-86d1-684b57ac4c11\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:04.705700 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-2465/hostexec-ip-172-20-0-238.ec2.internal-2hj62\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:04.733127 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"configmap-7439/pod-configmaps-fbba1ed2-4681-4eea-b514-36c6a522a8c7\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:05.332444 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-4708/affinity-clusterip-5m68k\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:05.355128 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-4708/affinity-clusterip-wg5sx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:05.359129 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-4708/affinity-clusterip-zltrz\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:06.390447 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-207/pvc-volume-tester-5n79t\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:06.651582 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/service-proxy-disabled-xb4ts\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:06.657048 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/service-proxy-disabled-vkt5j\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:06.665673 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/service-proxy-disabled-5s5bv\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:06.837099 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4301/pod-816d0dc8-82d7-4c65-a28c-3042087791b9\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:52:07.586359 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7533/hostexec-ip-172-20-0-138.ec2.internal-svdsx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:08.835392 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4301/pod-816d0dc8-82d7-4c65-a28c-3042087791b9\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-85xwc\\\" not found.\"\nE0622 08:52:08.846225 10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-816d0dc8-82d7-4c65-a28c-3042087791b9.16fae54c4a1c65c9\", GenerateName:\"\", 
Namespace:\"persistent-local-volumes-test-4301\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-4301\", Name:\"pod-816d0dc8-82d7-4c65-a28c-3042087791b9\", UID:\"f9aaa3b0-53ac-45d9-9b74-07a08b2ff72e\", APIVersion:\"v1\", ResourceVersion:\"45034\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-85xwc\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 22, 8, 52, 8, 835728841, time.Local), LastTimestamp:time.Date(2022, time.June, 22, 8, 52, 8, 835728841, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-816d0dc8-82d7-4c65-a28c-3042087791b9.16fae54c4a1c65c9\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4301 because it is being terminated' (will not retry!)\nI0622 08:52:10.837052 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-4301/pod-816d0dc8-82d7-4c65-a28c-3042087791b9\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-85xwc\\\" not found.\"\nE0622 08:52:10.853126 10 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-816d0dc8-82d7-4c65-a28c-3042087791b9.16fae54c4a1c65c9\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-4301\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-4301\", Name:\"pod-816d0dc8-82d7-4c65-a28c-3042087791b9\", UID:\"f9aaa3b0-53ac-45d9-9b74-07a08b2ff72e\", APIVersion:\"v1\", ResourceVersion:\"45135\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-85xwc\\\" not found.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:time.Date(2022, time.June, 22, 8, 52, 8, 835728841, time.Local), LastTimestamp:time.Date(2022, time.June, 22, 8, 52, 10, 837774377, time.Local), Count:2, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-816d0dc8-82d7-4c65-a28c-3042087791b9.16fae54c4a1c65c9\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4301 because it is being terminated' (will not retry!)\nI0622 08:52:12.465884 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"statefulset-642/ss2-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:13.117855 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-7403/aws-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:13.143966 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-8453/e2e-test-httpd-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:13.643012 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-4909-5716/csi-hostpathplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:14.465016 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-6702/local-injector\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:14.803205 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-2818/pod-subpath-test-preprovisionedpv-v2lf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:14.851212 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-125/pod-subpath-test-preprovisionedpv-d78f\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:15.823875 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/service-proxy-toggled-vw75m\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:15.866753 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/service-proxy-toggled-fhfmz\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:15.875304 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/service-proxy-toggled-cn6lp\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:16.176659 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7533/pod-36605ca4-66aa-4fc9-80db-b5a68c6d7ce2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:17.483016 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-4708/execpod-affinity85ql8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:17.969873 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6761-3866/csi-mockplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:18.052586 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6761-3866/csi-mockplugin-attacher-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:18.731975 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8480-6500/csi-hostpathplugin-0\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:19.402678 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-3180/test-webserver-b3ebc258-3449-496f-81cc-e0651fe598a6\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:19.972455 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-4909/pod-a2f24985-fafb-4a11-93df-6a5cca87fb84\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:21.311167 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3085/inline-volume-jk62j\" err=\"0/5 nodes are available: 5 waiting for 
ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-jk62j-my-volume\\\".\"\nI0622 08:52:21.852085 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3126/hostexec-ip-172-20-0-238.ec2.internal-njvt8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:22.095390 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pvc-protection-9232/pvc-tester-rh2jf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:23.298339 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-runtime-2248/image-pull-test68c200b7-9d8b-41f9-849f-4236e9194272\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:23.343420 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6761/pvc-volume-tester-pl5dk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:23.542253 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3085/inline-volume-tester-kzmfs\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-kzmfs-my-volume-0\\\".\"\nI0622 08:52:24.097273 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-6702/local-client\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:24.869059 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-97f5699f6-ndrc2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=3\nI0622 08:52:26.298880 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6666/hostexec-ip-172-20-0-114.ec2.internal-zccdn\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:27.046428 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8480/pod-subpath-test-dynamicpv-jxh5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:27.170879 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-97f5699f6-c8tld\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=2\nI0622 08:52:27.369646 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-1462/inline-volume-zl8t6\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-zl8t6-my-volume\\\".\"\nI0622 08:52:27.839220 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-2132/deployment-shared-unset-c757c87b9-9qlv6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:27.870914 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-2132/deployment-shared-unset-c757c87b9-wtgjn\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:27.873085 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-2132/deployment-shared-unset-c757c87b9-cd6x5\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:28.461264 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8879/hostexec-ip-172-20-0-114.ec2.internal-qbq7z\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:28.779892 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"kubectl-8065/agnhost-primary-96wl8\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:28.837995 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3126/local-injector\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:28.891901 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-3085/inline-volume-tester-kzmfs\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:28.909675 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6666/pod-74fbcccc-b081-411c-a30c-df86b964c08f\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.150380 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-8065/agnhost-primary-qdjx9\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:29.183699 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-1462-1829/csi-hostpathplugin-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.261453 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-1462/inline-volume-tester-6d7qg\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-6d7qg-my-volume-0\\\".\"\nI0622 08:52:29.491124 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1552/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.523590 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1552/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.542206 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-2465/pod-d25becc0-5f41-46d5-b1d2-89d5ffea94a0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.570601 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1552/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.583921 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-5232/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.590897 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1552/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.614416 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-5232/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.645580 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-5232/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:29.681898 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-5232/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:30.968614 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:32.175536 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-97f5699f6-n97x2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0622 08:52:35.222300 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-6492/pod-submit-remove-df830931-ab6d-46be-92dd-5fe98b17fad4\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:35.246399 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8879/pod-aa6401ad-f5dd-4cca-85f5-0004812d3c44\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:36.306167 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-4909/pod-cb5f3820-2ebc-44ad-9b40-77b7e376df7f\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:37.708311 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-2465/hostexec-ip-172-20-0-238.ec2.internal-zfd78\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:39.860777 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-1462/inline-volume-tester-6d7qg\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:41.148211 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-7f98d964c9-ks9mk\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=3\nI0622 08:52:41.405445 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-3662/pod-subpath-test-inlinevolume-6lzj\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:41.464766 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:41.609163 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6666/pod-9c966c37-81bd-4913-8759-282855f4c5b1\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:43.299070 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-9232/pvc-tester-294xz\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protection6gj6q\\\" is being deleted.\"\nI0622 08:52:44.347342 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3126/local-client\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:45.070120 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-up-exec-pod-7zwjn\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:45.621837 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"conntrack-6517/boom-server\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:46.079772 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-8989/all-succeed-ntjfl\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:46.102690 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-8989/all-succeed-wwnj5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:47.638143 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-7f98d964c9-gp9w9\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=2\nI0622 08:52:47.707100 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-6720/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:47.738108 
10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-6720/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:47.778586 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-6720/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:47.808790 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-6720/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:48.695770 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"hostpath-6457/pod-host-path-test\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:48.721085 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-1803/downward-api-098cd6b3-cf7d-4eb0-9479-007591283933\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:48.852995 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-8989/all-succeed-9p9fc\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:48.891189 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-8989/all-succeed-n9r9g\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:51.574250 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:52.081253 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-test-8263/implicit-root-uid\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:52.495753 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-1282/pod-subpath-test-inlinevolume-g26h\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:52.994597 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:53.419017 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9917/hostexec-ip-172-20-0-114.ec2.internal-w7zmp\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:53.890866 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-1552/test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:54.001012 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pod-network-test-5232/test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:54.391799 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"events-1903/send-events-c7fcdc07-19fa-45c2-844b-fe2e024427dd\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:55.241822 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-1909/foo-x6zbv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:55.253436 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-1909/foo-n9kkg\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:57.458341 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-603/logs-generator\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:52:57.795105 10 scheduler.go:615] \"Successfully 
bound pod to node\" pod=\"conntrack-6517/startup-script\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:58.401279 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-7f98d964c9-nvs7g\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:52:59.399633 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-1462/inline-volume-tester2-ccggd\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester2-ccggd-my-volume-0\\\".\"\nI0622 08:52:59.953572 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9917/pod-de4cddb0-9c8b-4c77-a85b-1cd132e5fa09\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:00.348584 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-7724/hostexec-ip-172-20-0-114.ec2.internal-4gqmj\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:01.567009 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:01.637455 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:01.886719 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-1462/inline-volume-tester2-ccggd\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:02.565154 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1182/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:53:03.333598 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-2763/simpletest.deployment-78cb48dccd-p9f48\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:03.373973 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-2763/simpletest.deployment-78cb48dccd-w2s9p\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:03.392862 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-5697/downwardapi-volume-5a020f4c-5297-414e-a70f-999fe0994608\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:03.424299 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-7bb5b8d6cd-6x47g\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=3\nI0622 08:53:03.795990 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-1324/simpletest.rc-jftcx\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:03.813035 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"gc-1324/simpletest.rc-zhfm8\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:06.416475 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3662/aws-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:06.863292 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-1397/test-dns-nameservers\" node=\"ip-172-20-0-92.ec2.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:07.284842 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"conntrack-642/pod-client\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:08.313113 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-up-host-exec-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:08.778174 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1182/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0622 08:53:08.864408 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9917/pod-229ceb7c-d259-4307-adec-67a5f155c9f7\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:09.691808 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-7bb5b8d6cd-2kcbq\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=2\nI0622 08:53:11.410322 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"deployment-5123/test-rolling-update-with-lb-7bb5b8d6cd-vzjqr\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:11.969673 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-8488/pod-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:12.009676 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-8488/pod-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:12.060453 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"disruption-8488/pod-2\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:12.223703 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-6720/test-container-pod\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:12.268697 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-6720/host-test-container-pod\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:13.537663 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-5075/deployment-7c658794b9-62gjx\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:13.577652 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-5075/deployment-7c658794b9-j5h86\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:13.578198 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-5075/deployment-7c658794b9-v2snt\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:13.645647 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-5075/deployment-7c658794b9-ztbwr\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:13.669058 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"apply-5075/deployment-7c658794b9-5smd2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:14.864613 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"subpath-2202/pod-subpath-test-configmap-gx86\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:15.290548 10 scheduler.go:615] 
\"Successfully bound pod to node\" pod=\"provisioning-7724/pod-subpath-test-preprovisionedpv-9pzg\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:15.495165 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"security-context-834/security-context-bd829b2c-4b07-4000-96e9-31c476da0971\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:17.440340 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"conntrack-642/pod-server-1\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:17.915289 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-2103/downwardapi-volume-e2c57b65-49dd-4067-99c1-77966ab09098\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:18.427120 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-up-exec-pod-ngctj\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:19.583546 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-227/pod-33317641-243f-49e3-85f4-699e5407bac5\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:19.758334 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-1246/pod-secrets-ec859788-9654-4921-801b-ac57dca9e65d\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:20.383390 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"containers-8365/client-containers-211a80cb-7cc1-4366-82aa-594e50685d9e\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:20.723740 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8120/inline-volume-hfmzl\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-hfmzl-my-volume\\\".\"\nI0622 08:53:22.083114 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8543/hostexec-ip-172-20-0-138.ec2.internal-qtv2h\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:22.289044 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-7462-732/csi-hostpathplugin-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:23.019986 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-8120/inline-volume-tester-jtj44\" err=\"0/5 nodes are available: 5 waiting for ephemeral volume controller to create the persistentvolumeclaim \\\"inline-volume-tester-jtj44-my-volume-0\\\".\"\nI0622 08:53:25.671905 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-wrapper-4697/pod-secrets-e802cb7f-a202-4abd-9ff3-fc5052a87ec2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:26.474499 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1072/verify-service-down-host-exec-pod\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:26.510813 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-5643/pod-67772352-630e-425c-933c-74cafd1ba348\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:28.467003 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-8425/hostexec\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:28.646551 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-7462/pod-e3086b5c-9667-4396-9288-98b847c103d4\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:28.916908 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"ephemeral-8120/inline-volume-tester-jtj44\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:28.976830 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"projected-395/pod-projected-configmaps-4033bf89-bdf6-439e-a627-baacb90a481e\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:30.434788 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9757-6582/csi-hostpathplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:30.581966 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-9757/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:53:31.266260 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pv-7912/pod-ephm-test-projected-s9hp\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:31.749334 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-9757/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:53:32.563487 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-3662/aws-client\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:32.571008 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4034/hostexec-ip-172-20-0-114.ec2.internal-kdt8p\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:33.422707 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"emptydir-7758/pod-7819a14c-b5c8-416c-b9cb-0c0be1a45629\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:33.633897 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"aggregator-381/sample-apiserver-deployment-7b4b967944-t9r2b\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:33.907221 10 factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-9757/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:53:34.160246 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"container-probe-7379/liveness-ee2a62f2-c1ce-419e-9173-124ab558b0ba\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:36.496201 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-2\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:36.720515 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumelimits-3933-5279/csi-hostpathplugin-0\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:37.185108 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-227/pod-217072c8-b898-4852-924d-fa7ab85da3dc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:37.935756 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9757/hostpath-injector\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:38.672837 10 
scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-5108/httpd\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:39.442071 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3875/pod-e87f1aef-5aa2-4052-be12-c1208e794803\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:39.498686 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:42.490468 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1141/kube-proxy-mode-detector\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:43.242750 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4034/pod-subpath-test-preprovisionedpv-pd2t\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:43.736611 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8543/pod-subpath-test-preprovisionedpv-bhlp\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:45.581953 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volumemode-3875/hostexec-ip-172-20-0-138.ec2.internal-46lzw\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:45.812156 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8490/netserver-0\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:45.843815 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8490/netserver-1\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:45.879212 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8490/netserver-2\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:45.910127 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"nettest-8490/netserver-3\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:49.563283 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-4034/pod-subpath-test-preprovisionedpv-pd2t\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:50.206082 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-8812/pod-subpath-test-dynamicpv-6h5f\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:51.193805 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1141/affinity-clusterip-timeout-wg7tv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:51.213425 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1141/affinity-clusterip-timeout-k6hvf\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:51.221424 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1141/affinity-clusterip-timeout-cls8l\" node=\"ip-172-20-0-114.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:52.839712 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"pods-696/pod-test\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:53.361859 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"statefulset-642/ss2-0\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:54.185117 10 
factory.go:209] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-9757/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0622 08:53:54.405725 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-9511/downward-api-08495df6-c1be-4a07-8b79-d5c49f8e8401\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:54.576504 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubelet-test-9854/busybox-readonly-fs32be789e-e49a-4aa4-87de-d76ba931e022\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:55.935654 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-9757/hostpath-client\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:53:57.389807 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"kubectl-6994/httpd-deployment-95bc5655f-4s69j\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:53:58.113244 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:00.141030 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"cronjob-7332/replace-27598134-8d8h8\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:00.995702 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-expand-7462/pod-1dc8f293-a223-4c73-bb8b-45057d705cfc\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:54:01.060189 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"secrets-409/pod-secrets-ea87132f-26ef-4508-af5c-cf9bc1e38fa4\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:02.508353 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"volume-1613/hostpathsymlink-injector\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:54:03.343109 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"services-1141/execpod-affinitylmbp2\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:03.811801 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-9290/exceed-active-deadline-xcx7v\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:03.829736 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-9290/exceed-active-deadline-k8xbt\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:04.353519 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"provisioning-862/hostexec-ip-172-20-0-92.ec2.internal-psbt6\" node=\"ip-172-20-0-92.ec2.internal\" evaluatedNodes=5 feasibleNodes=1\nI0622 08:54:06.103087 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-3609/indexed-job-0-rzjrw\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:06.117930 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-3609/indexed-job-1-75rpl\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:06.966520 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"job-7229/adopt-release-nz547\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:06.980089 10 scheduler.go:615] \"Successfully bound pod to node\" 
pod=\"job-7229/adopt-release-jhtdv\" node=\"ip-172-20-0-238.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\nI0622 08:54:07.294492 10 scheduler.go:615] \"Successfully bound pod to node\" pod=\"downward-api-4999/downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a\" node=\"ip-172-20-0-138.ec2.internal\" evaluatedNodes=5 feasibleNodes=4\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-0-28.ec2.internal ====\n==== START logs for container metrics-server of pod kube-system/metrics-server-655dc594b4-h7bxn ====\nI0622 08:37:49.490142 1 serving.go:341] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)\nI0622 08:37:49.875133 1 secure_serving.go:197] Serving securely on [::]:443\nI0622 08:37:49.875342 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController\nI0622 08:37:49.875386 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController\nI0622 08:37:49.875446 1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key\nI0622 08:37:49.875515 1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0622 08:37:49.875880 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0622 08:37:49.875937 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0622 08:37:49.875988 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0622 08:37:49.876013 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0622 08:37:49.975640 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController \nI0622 08:37:49.976693 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file \nI0622 08:37:49.976695 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nE0622 08:38:03.353051 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-145.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-145.ec2.internal\"\nE0622 08:38:03.353189 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-16.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": dial tcp 172.20.0.16:10250: i/o timeout\" node=\"ip-172-20-0-16.ec2.internal\"\nE0622 08:38:03.355560 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-206.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-206.ec2.internal\"\nE0622 08:38:33.353835 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-74.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-74.ec2.internal\"\nE0622 08:38:48.353285 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-74.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-74.ec2.internal\"\n==== END logs for container metrics-server of pod kube-system/metrics-server-655dc594b4-h7bxn ====\n==== START logs for container metrics-server of pod 
kube-system/metrics-server-655dc594b4-wctbl ====\nI0622 08:37:08.602839 1 serving.go:341] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)\nI0622 08:37:09.185357 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController\nI0622 08:37:09.185379 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController\nI0622 08:37:09.185558 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0622 08:37:09.185651 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0622 08:37:09.185746 1 secure_serving.go:197] Serving securely on [::]:443\nI0622 08:37:09.185749 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0622 08:37:09.185910 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0622 08:37:09.185859 1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key\nI0622 08:37:09.185868 1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0622 08:37:09.286552 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file \nI0622 08:37:09.286721 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nI0622 08:37:09.286839 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController \nI0622 08:37:35.282063 1 server.go:188] \"Failed probe\" probe=\"metric-storage-ready\" err=\"not metrics to serve\"\nE0622 08:37:37.662581 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-145.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-145.ec2.internal\"\nE0622 08:37:52.662926 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-206.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": dial tcp 172.20.0.206:10250: i/o timeout\" node=\"ip-172-20-0-206.ec2.internal\"\nE0622 08:37:52.662962 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-145.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": dial tcp 172.20.0.145:10250: i/o timeout\" node=\"ip-172-20-0-145.ec2.internal\"\nE0622 08:37:52.664093 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-16.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-16.ec2.internal\"\nE0622 08:38:07.663492 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-206.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-206.ec2.internal\"\nE0622 08:38:07.663492 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-16.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-16.ec2.internal\"\nE0622 08:38:07.664701 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-145.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-145.ec2.internal\"\nE0622 08:38:22.663855 1 scraper.go:139] \"Failed to scrape node\" err=\"Get 
\\\"https://ip-172-20-0-74.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": dial tcp 172.20.0.74:10250: i/o timeout\" node=\"ip-172-20-0-74.ec2.internal\"\nE0622 08:38:37.663257 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-74.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-74.ec2.internal\"\nE0622 08:38:52.663823 1 scraper.go:139] \"Failed to scrape node\" err=\"Get \\\"https://ip-172-20-0-74.ec2.internal:10250/stats/summary?only_cpu_and_memory=true\\\": context deadline exceeded\" node=\"ip-172-20-0-74.ec2.internal\"\n==== END logs for container metrics-server of pod kube-system/metrics-server-655dc594b4-wctbl ====\n==== START logs for container node-cache of pod kube-system/node-local-dns-9dkkb ====\n2022/06/22 08:36:27 [INFO] Starting node-cache image: 1.21.3\n2022/06/22 08:36:27 [INFO] Using Corefile /etc/Corefile\n2022/06/22 08:36:27 [INFO] Using Pidfile \n2022/06/22 08:36:27 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:36:27 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:36:27 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:36:27 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:36:27 [INFO] Added interface - nodelocaldns\n.:53 on 169.254.20.10\ncluster.local.:53 on 169.254.20.10\nin-addr.arpa.:53 on 169.254.20.10\nip6.arpa.:53 on 169.254.20.10\n[INFO] plugin/reload: Running configuration MD5 = bee5c1414ced0a6463928ef5821d1b56\nCoreDNS-1.7.0\nlinux/amd64, go1.16.10, \n==== END logs for container node-cache of pod kube-system/node-local-dns-9dkkb ====\n==== START logs for container node-cache of pod kube-system/node-local-dns-bs67t ====\n2022/06/22 08:36:22 [INFO] Starting node-cache image: 1.21.3\n2022/06/22 08:36:22 [INFO] Using Corefile /etc/Corefile\n2022/06/22 08:36:22 [INFO] Using Pidfile \n2022/06/22 08:36:22 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:36:22 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 
100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:36:22 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:36:22 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:36:22 [INFO] Added interface - nodelocaldns\ncluster.local.:53 on 169.254.20.10\nin-addr.arpa.:53 on 169.254.20.10\nip6.arpa.:53 on 169.254.20.10\n.:53 on 169.254.20.10\n[INFO] plugin/reload: Running configuration MD5 = bee5c1414ced0a6463928ef5821d1b56\nCoreDNS-1.7.0\nlinux/amd64, go1.16.10, \n==== END logs for container node-cache of pod kube-system/node-local-dns-bs67t ====\n==== START logs for container node-cache of pod kube-system/node-local-dns-nmw5h ====\n2022/06/22 08:23:23 [INFO] Starting node-cache image: 1.21.3\n2022/06/22 08:23:23 [INFO] Using Corefile /etc/Corefile\n2022/06/22 08:23:23 [INFO] Using Pidfile \n2022/06/22 08:23:23 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:23:23 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:23:23 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:23:23 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 
/etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:23:23 [INFO] Added interface - nodelocaldns\ncluster.local.:53 on 169.254.20.10\nin-addr.arpa.:53 on 169.254.20.10\nip6.arpa.:53 on 169.254.20.10\n.:53 on 169.254.20.10\n[INFO] plugin/reload: Running configuration MD5 = bee5c1414ced0a6463928ef5821d1b56\nCoreDNS-1.7.0\nlinux/amd64, go1.16.10, \n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. 
HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8999296950508906025.8154243191446495253.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4988660656420667220.6558930464253432446.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4055183396795292855.5737974498059146153.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.kube-system.svc.cluster.local. AAAA: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.kube-system.svc.cluster.local. A: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.kube-system.svc.cluster.local. AAAA: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.kube-system.svc.cluster.local. A: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.svc.cluster.local. A: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.svc.cluster.local. AAAA: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.svc.cluster.local. A: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.svc.cluster.local. AAAA: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.cluster.local. AAAA: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.cluster.local. A: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.cluster.local. A: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 sqs.us-east-1.amazonaws.com.cluster.local. AAAA: dial tcp 100.65.2.111:53: connect: connection refused\n[ERROR] plugin/errors: 2 ec2.us-east-1.amazonaws.com.kube-system.svc.cluster.local. A: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 ec2.us-east-1.amazonaws.com.kube-system.svc.cluster.local. AAAA: dial tcp 100.65.2.111:53: i/o timeout\n==== END logs for container node-cache of pod kube-system/node-local-dns-nmw5h ====\n==== START logs for container node-cache of pod kube-system/node-local-dns-t99f2 ====\n2022/06/22 08:32:44 [INFO] Starting node-cache image: 1.21.3\n2022/06/22 08:32:44 [INFO] Using Corefile /etc/Corefile\n2022/06/22 08:32:44 [INFO] Using Pidfile \n2022/06/22 08:32:44 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:32:44 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 
/etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:32:44 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:32:44 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:32:44 [INFO] Added interface - nodelocaldns\ncluster.local.:53 on 169.254.20.10\nin-addr.arpa.:53 on 169.254.20.10\nip6.arpa.:53 on 169.254.20.10\n.:53 on 169.254.20.10\n[INFO] plugin/reload: Running configuration MD5 = bee5c1414ced0a6463928ef5821d1b56\nCoreDNS-1.7.0\nlinux/amd64, go1.16.10, \n[ERROR] plugin/errors: 2 8585175369393338312.7516074672655220401.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4617784794433790537.6733048040847370949.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 2442361350091752603.7696857799393331274.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 8585175369393338312.7516074672655220401.cluster.local. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 2442361350091752603.7696857799393331274.in-addr.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n[ERROR] plugin/errors: 2 4617784794433790537.6733048040847370949.ip6.arpa. HINFO: dial tcp 100.65.2.111:53: i/o timeout\n==== END logs for container node-cache of pod kube-system/node-local-dns-t99f2 ====\n==== START logs for container node-cache of pod kube-system/node-local-dns-zrm79 ====\n2022/06/22 08:36:15 [INFO] Starting node-cache image: 1.21.3\n2022/06/22 08:36:15 [INFO] Using Corefile /etc/Corefile\n2022/06/22 08:36:15 [INFO] Using Pidfile \n2022/06/22 08:36:15 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:36:15 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:36:15 [INFO] Updated Corefile with 0 custom stubdomains and upstream servers /etc/resolv.conf\n2022/06/22 08:36:15 [INFO] Using config file:\ncluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n health 169.254.20.10:3989\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . 
100.65.2.111 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.20.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n2022/06/22 08:36:15 [INFO] Added interface - nodelocaldns\ncluster.local.:53 on 169.254.20.10\nin-addr.arpa.:53 on 169.254.20.10\nip6.arpa.:53 on 169.254.20.10\n.:53 on 169.254.20.10\n[INFO] plugin/reload: Running configuration MD5 = bee5c1414ced0a6463928ef5821d1b56\nCoreDNS-1.7.0\nlinux/amd64, go1.16.10, \n==== END logs for container node-cache of pod kube-system/node-local-dns-zrm79 ====\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"19505\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"50839\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"50841\"\n },\n \"items\": []\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"resourceVersion\": \"50841\"\n },\n \"items\": []\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"resourceVersion\": \"50842\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"resourceVersion\": \"50844\"\n },\n \"items\": []\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"50845\"\n },\n \"items\": []\n}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:54:09.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-2441" for this suite. [32m•[0m ... skipping 6 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 22 08:54:07.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a" in namespace "downward-api-4999" to be "Succeeded or Failed" Jun 22 08:54:07.328: INFO: Pod "downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.395423ms Jun 22 08:54:09.361: INFO: Pod "downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062562117s Jun 22 08:54:11.392: INFO: Pod "downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.093417934s [1mSTEP[0m: Saw pod success Jun 22 08:54:11.392: INFO: Pod "downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a" satisfied condition "Succeeded or Failed" Jun 22 08:54:11.427: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a container client-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:54:11.496: INFO: Waiting for pod downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a to disappear Jun 22 08:54:11.527: INFO: Pod downwardapi-volume-29165241-3631-46f1-bad5-26b4d4bcc10a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:54:11.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "downward-api-4999" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":246,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:11.596: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 101 lines ... [32m• [SLOW TEST:19.593 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should run through the lifecycle of Pods and PodStatus [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":23,"skipped":204,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:12.226: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 98 lines ... [1mSTEP[0m: Deleting pod aws-client in namespace volume-3662 Jun 22 08:53:59.862: INFO: Waiting for pod aws-client to disappear Jun 22 08:53:59.893: INFO: Pod aws-client still exists Jun 22 08:54:01.894: INFO: Waiting for pod aws-client to disappear Jun 22 08:54:01.925: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws Jun 22 08:54:02.060: INFO: Couldn't delete PD "aws://us-east-1a/vol-0385cb867ae9db934", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0385cb867ae9db934 is currently attached to i-0aec406fbaec3a605 status code: 400, request id: 1838aba7-025d-4acb-bd28-434d6416dedf Jun 22 08:54:07.662: INFO: Couldn't delete PD "aws://us-east-1a/vol-0385cb867ae9db934", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0385cb867ae9db934 is currently attached to i-0aec406fbaec3a605 status code: 400, request id: b70f239b-fab5-4799-9972-fc86579ee764 Jun 22 08:54:13.040: INFO: Successfully deleted PD "aws://us-east-1a/vol-0385cb867ae9db934". 
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:54:13.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-3662" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (ext4)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":27,"skipped":212,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... [32m• [SLOW TEST:11.527 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should adopt matching orphans and release non-matching pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":21,"skipped":182,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:18.304: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 137 lines ... [32m• [SLOW TEST:299.612 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should not disrupt a cloud load-balancer's connectivity during rollout [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:161[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":16,"skipped":106,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:19.786: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 158 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":29,"skipped":275,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:54:09.957: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-test-volume-25a02e99-78bb-405e-83ea-e075fe3ee323 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 22 08:54:10.187: INFO: Waiting up to 5m0s for pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44" in namespace "configmap-626" to be "Succeeded or Failed" Jun 22 08:54:10.219: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44": Phase="Pending", Reason="", readiness=false. Elapsed: 31.717844ms Jun 22 08:54:12.251: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063907249s Jun 22 08:54:14.285: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097267686s Jun 22 08:54:16.317: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130163451s Jun 22 08:54:18.351: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163764977s Jun 22 08:54:20.383: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.195671376s [1mSTEP[0m: Saw pod success Jun 22 08:54:20.383: INFO: Pod "pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44" satisfied condition "Succeeded or Failed" Jun 22 08:54:20.415: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 22 08:54:20.488: INFO: Waiting for pod pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44 to disappear Jun 22 08:54:20.520: INFO: Pod pod-configmaps-cbb441b4-4048-41af-bf00-1ba933cdbe44 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":275,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:20.589: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 2 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: dir-link-bindmounted] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 106 lines ... Jun 22 08:53:46.075: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-8812jftld [1mSTEP[0m: creating a claim Jun 22 08:53:46.107: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-6h5f [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:53:46.206: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6h5f" in namespace "provisioning-8812" to be "Succeeded or Failed" Jun 22 08:53:46.238: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.279384ms Jun 22 08:53:48.270: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063849879s Jun 22 08:53:50.301: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095422916s Jun 22 08:53:52.332: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12638851s Jun 22 08:53:54.364: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157864813s Jun 22 08:53:56.397: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.191690867s ... skipping 2 lines ... Jun 22 08:54:02.492: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.286255079s Jun 22 08:54:04.523: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.317672113s Jun 22 08:54:06.556: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.35027555s Jun 22 08:54:08.592: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.38589515s Jun 22 08:54:10.623: INFO: Pod "pod-subpath-test-dynamicpv-6h5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.417815504s [1mSTEP[0m: Saw pod success Jun 22 08:54:10.624: INFO: Pod "pod-subpath-test-dynamicpv-6h5f" satisfied condition "Succeeded or Failed" Jun 22 08:54:10.654: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod pod-subpath-test-dynamicpv-6h5f container test-container-volume-dynamicpv-6h5f: <nil> [1mSTEP[0m: delete the pod Jun 22 08:54:10.723: INFO: Waiting for pod pod-subpath-test-dynamicpv-6h5f to disappear Jun 22 08:54:10.755: INFO: Pod pod-subpath-test-dynamicpv-6h5f no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-6h5f Jun 22 08:54:10.755: INFO: Deleting pod "pod-subpath-test-dynamicpv-6h5f" in namespace "provisioning-8812" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":17,"skipped":97,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:21.121: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 142 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":21,"skipped":113,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:24.136: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 9 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":24,"skipped":151,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:54:05.910: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename job [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 11 lines ... [32m• [SLOW TEST:20.318 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should create pods for an Indexed job with completion indexes and specified hostname [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:150[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":25,"skipped":151,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy ... skipping 100 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":37,"skipped":314,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:28.406: INFO: Only supported for providers [openstack] (not aws) ... skipping 116 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: vsphere] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [vsphere] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438 [90m------------------------------[0m ... skipping 145 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:54:28.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-2814" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]","total":-1,"completed":38,"skipped":361,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:28.821: INFO: Only supported for providers [gce gke] (not aws) ... skipping 25 lines ... 
Jun 22 08:53:58.061: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename volume [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should store data /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159 Jun 22 08:53:58.240: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 22 08:53:58.318: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1613" in namespace "volume-1613" to be "Succeeded or Failed" Jun 22 08:53:58.348: INFO: Pod "hostpath-symlink-prep-volume-1613": Phase="Pending", Reason="", readiness=false. Elapsed: 30.714929ms Jun 22 08:54:00.380: INFO: Pod "hostpath-symlink-prep-volume-1613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062789099s Jun 22 08:54:02.415: INFO: Pod "hostpath-symlink-prep-volume-1613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097697047s [1mSTEP[0m: Saw pod success Jun 22 08:54:02.415: INFO: Pod "hostpath-symlink-prep-volume-1613" satisfied condition "Succeeded or Failed" Jun 22 08:54:02.415: INFO: Deleting pod "hostpath-symlink-prep-volume-1613" in namespace "volume-1613" Jun 22 08:54:02.451: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1613" to be fully deleted Jun 22 08:54:02.482: INFO: Creating resource for inline volume [1mSTEP[0m: starting hostpathsymlink-injector [1mSTEP[0m: Writing text file contents in the container. Jun 22 08:54:06.581: INFO: Running '/logs/artifacts/403903f7-f202-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-1613 exec hostpathsymlink-injector --namespace=volume-1613 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-1613' > /opt/0/index.html' ... skipping 49 lines ... Jun 22 08:54:20.715: INFO: Pod hostpathsymlink-client still exists Jun 22 08:54:22.715: INFO: Waiting for pod hostpathsymlink-client to disappear Jun 22 08:54:22.748: INFO: Pod hostpathsymlink-client still exists Jun 22 08:54:24.716: INFO: Waiting for pod hostpathsymlink-client to disappear Jun 22 08:54:24.751: INFO: Pod hostpathsymlink-client no longer exists [1mSTEP[0m: cleaning the environment after hostpathsymlink Jun 22 08:54:24.799: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-1613" in namespace "volume-1613" to be "Succeeded or Failed" Jun 22 08:54:24.830: INFO: Pod "hostpath-symlink-prep-volume-1613": Phase="Pending", Reason="", readiness=false. Elapsed: 31.486ms Jun 22 08:54:26.862: INFO: Pod "hostpath-symlink-prep-volume-1613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06296716s Jun 22 08:54:28.894: INFO: Pod "hostpath-symlink-prep-volume-1613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095245261s [1mSTEP[0m: Saw pod success Jun 22 08:54:28.894: INFO: Pod "hostpath-symlink-prep-volume-1613" satisfied condition "Succeeded or Failed" Jun 22 08:54:28.894: INFO: Deleting pod "hostpath-symlink-prep-volume-1613" in namespace "volume-1613" Jun 22 08:54:28.949: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-1613" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 22 08:54:28.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-1613" for this suite. ... 
skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":22,"skipped":257,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":16,"skipped":73,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:51:45.764: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename statefulset [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 71 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should perform rolling updates and roll backs of template modifications [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":17,"skipped":73,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 22 08:54:24.138: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating secret with name secret-test-800b56a7-eb21-42ef-b228-a7435664dde4 [1mSTEP[0m: Creating a pod to test consume secrets Jun 22 08:54:24.361: INFO: Waiting up to 5m0s for pod "pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0" in namespace "secrets-5740" to be "Succeeded or Failed" Jun 22 08:54:24.392: INFO: Pod "pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.488217ms Jun 22 08:54:26.427: INFO: Pod "pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065683429s Jun 22 08:54:28.458: INFO: Pod "pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.097336034s Jun 22 08:54:30.491: INFO: Pod "pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129980433s [1mSTEP[0m: Saw pod success Jun 22 08:54:30.491: INFO: Pod "pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0" satisfied condition "Succeeded or Failed" Jun 22 08:54:30.529: INFO: Trying to get logs from node ip-172-20-0-138.ec2.internal pod pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0 container secret-env-test: <nil> [1mSTEP[0m: delete the pod Jun 22 08:54:30.604: INFO: Waiting for pod pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0 to disappear Jun 22 08:54:30.637: INFO: Pod pod-secrets-0ae5af4c-88a4-4439-a157-ce3e5ed597b0 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:6.576 seconds][0m [sig-node] Secrets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be consumable from pods in env vars [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":114,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 58 lines ... Jun 22 08:54:27.056: INFO: PersistentVolumeClaim pvc-bshw5 found but phase is Pending instead of Bound. Jun 22 08:54:29.086: INFO: PersistentVolumeClaim pvc-bshw5 found and phase=Bound (14.24360627s) Jun 22 08:54:29.086: INFO: Waiting up to 3m0s for PersistentVolume local-78wv9 to have phase Bound Jun 22 08:54:29.116: INFO: PersistentVolume local-78wv9 found and phase=Bound (29.58535ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-qsnn [1mSTEP[0m: Creating a pod to test subpath Jun 22 08:54:29.225: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qsnn" in namespace "provisioning-9393" to be "Succeeded or Failed" Jun 22 08:54:29.257: INFO: Pod "pod-subpath-test-preprovisionedpv-qsnn": Phase="Pending", Reason="", readiness=false. Elapsed: 31.835695ms Jun 22 08:54:31.289: INFO: Pod "pod-subpath-test-preprovisionedpv-qsnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064562313s Jun 22 08:54:33.322: INFO: Pod "pod-subpath-test-preprovisionedpv-qsnn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096829387s [1mSTEP[0m: Saw pod success Jun 22 08:54:33.322: INFO: Pod "pod-subpath-test-preprovisionedpv-qsnn" satisfied condition "Succeeded or Failed" Jun 22 08:54:33.363: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod pod-subpath-test-preprovisionedpv-qsnn container test-container-volume-preprovisionedpv-qsnn: <nil> [1mSTEP[0m: delete the pod Jun 22 08:54:33.458: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qsnn to disappear Jun 22 08:54:33.488: INFO: Pod pod-subpath-test-preprovisionedpv-qsnn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-qsnn Jun 22 08:54:33.488: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qsnn" in namespace "provisioning-9393" ... skipping 21 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":24,"skipped":213,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:34.046: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 65 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl copy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1368[0m should copy a file from a running Pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1385[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":39,"skipped":377,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:34.611: INFO: Only supported for providers [gce gke] (not aws) ... skipping 47 lines ... Jun 22 08:54:27.624: INFO: PersistentVolumeClaim pvc-pv4z8 found but phase is Pending instead of Bound. Jun 22 08:54:29.659: INFO: PersistentVolumeClaim pvc-pv4z8 found and phase=Bound (6.133414916s) Jun 22 08:54:29.659: INFO: Waiting up to 3m0s for PersistentVolume local-vvzcv to have phase Bound Jun 22 08:54:29.690: INFO: PersistentVolume local-vvzcv found and phase=Bound (31.399837ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-69gt [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 22 08:54:29.788: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-69gt" in namespace "volume-4037" to be "Succeeded or Failed" Jun 22 08:54:29.819: INFO: Pod "exec-volume-test-preprovisionedpv-69gt": Phase="Pending", Reason="", readiness=false. Elapsed: 31.142191ms Jun 22 08:54:31.851: INFO: Pod "exec-volume-test-preprovisionedpv-69gt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063069288s Jun 22 08:54:33.893: INFO: Pod "exec-volume-test-preprovisionedpv-69gt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.104792654s [1mSTEP[0m: Saw pod success Jun 22 08:54:33.893: INFO: Pod "exec-volume-test-preprovisionedpv-69gt" satisfied condition "Succeeded or Failed" Jun 22 08:54:33.925: INFO: Trying to get logs from node ip-172-20-0-114.ec2.internal pod exec-volume-test-preprovisionedpv-69gt container exec-container-preprovisionedpv-69gt: <nil> [1mSTEP[0m: delete the pod Jun 22 08:54:34.037: INFO: Waiting for pod exec-volume-test-preprovisionedpv-69gt to disappear Jun 22 08:54:34.088: INFO: Pod exec-volume-test-preprovisionedpv-69gt no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-69gt Jun 22 08:54:34.088: INFO: Deleting pod "exec-volume-test-preprovisionedpv-69gt" in namespace "volume-4037" ... skipping 28 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (ext4)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":27,"skipped":208,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:35.507: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 166 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":16,"skipped":171,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 22 08:54:36.096: INFO: Only supported for providers [gce gke] (not aws) ... skipping 69 lines ... 
Jun 22 08:54:29.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 22 08:54:29.930: INFO: Waiting up to 5m0s for pod "security-context-502997c1-0e73-497e-b00f-708049d7a240" in namespace "security-context-8633" to be "Succeeded or Failed"
Jun 22 08:54:29.960: INFO: Pod "security-context-502997c1-0e73-497e-b00f-708049d7a240": Phase="Pending", Reason="", readiness=false. Elapsed: 30.21861ms
Jun 22 08:54:31.991: INFO: Pod "security-context-502997c1-0e73-497e-b00f-708049d7a240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061210994s
Jun 22 08:54:34.032: INFO: Pod "security-context-502997c1-0e73-497e-b00f-708049d7a240": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101744646s
Jun 22 08:54:36.066: INFO: Pod "security-context-502997c1-0e73-497e-b00f-708049d7a240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135823331s
STEP: Saw pod success
Jun 22 08:54:36.066: INFO: Pod "security-context-502997c1-0e73-497e-b00f-708049d7a240" satisfied condition "Succeeded or Failed"
Jun 22 08:54:36.099: INFO: Trying to get logs from node ip-172-20-0-238.ec2.internal pod security-context-502997c1-0e73-497e-b00f-708049d7a240 container test-container: <nil>
STEP: delete the pod
Jun 22 08:54:36.213: INFO: Waiting for pod security-context-502997c1-0e73-497e-b00f-708049d7a240 to disappear
Jun 22 08:54:36.248: INFO: Pod security-context-502997c1-0e73-497e-b00f-708049d7a240 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.587 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":18,"skipped":74,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 22 08:54:36.332: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 121 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 22 08:54:36.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-8206" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease should have OwnerReferences set","total":-1,"completed":17,"skipped":188,"failed":0}
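The [sig-node] Security Context run earlier in this block exercises pod.Spec.SecurityContext.RunAsUser: it creates a pod whose pod-level security context requests a specific UID, waits for it to complete with the same phase-polling shown above, and then fetches the test-container logs. As an illustration only, a pod object of that general shape might be built as in the following sketch; the helper name runAsUserPod, the UID 1000 and the busybox image are assumptions and are not taken from the test or this log.

// Hedged sketch only: runAsUserPod, UID 1000 and the busybox image are illustrative
// assumptions; the real e2e test chooses its own values and test images.
package secctx

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runAsUserPod returns a pod whose pod-level SecurityContext sets RunAsUser, so the
// single test container runs as that UID and its output can be checked afterwards.
func runAsUserPod(namespace string) *corev1.Pod {
	uid := int64(1000) // assumed UID for the sketch
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "security-context-",
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "id -u"},
			}},
		},
	}
}

Reading the container output after completion is consistent with the "Trying to get logs from node ... container test-container" line in the run above.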
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 22 08:54:36.487: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 39 lines ...
Jun 22 08:54:10.326: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:10.360: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:10.517: INFO: Unable to read jessie_udp@dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:10.549: INFO: Unable to read jessie_tcp@dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:10.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:10.612: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:10.734: INFO: Lookups using dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9 failed for: [wheezy_udp@dns-test-service.dns-2691.svc.cluster.local wheezy_tcp@dns-test-service.dns-2691.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local jessie_udp@dns-test-service.dns-2691.svc.cluster.local jessie_tcp@dns-test-service.dns-2691.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local]
Jun 22 08:54:15.765: INFO: Unable to read wheezy_udp@dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:15.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:15.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:15.857: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:16.015: INFO: Unable to read jessie_udp@dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:16.045: INFO: Unable to read jessie_tcp@dns-test-service.dns-2691.svc.cluster.local from pod dns-2691/dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9: the server could not find the requested resource (get pods dns-test-5b9a64ba-c4d1-43f6-9f95-0fe39fe96dd9)
Jun 22 08:54:16.075: INFO: Unable to read jessie_udp@_ht